Commit 6daa9043 authored by Linus Torvalds

Merge tag 'dmaengine-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine

Pull dmaengine updates from Vinod Koul:
 "The last dmaengine updates for this year :)

  This contains a couple of new drivers, new device support and updates to a
  bunch of drivers.

  New drivers/devices:
   - Qualcomm ADM driver
   - Qualcomm GPI driver
   - Allwinner A100 DMA support
   - Microchip Sama7g5 support
   - Mediatek MT8516 apdma

  Updates:
   - more updates to idxd driver and support for IAX config
   - runtime PM support for dw driver
   - TI drivers"

* tag 'dmaengine-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (75 commits)
  soc: ti: k3-ringacc: Use correct error casting in k3_ringacc_dmarings_init
  dmaengine: ti: k3-udma-glue: Add support for K3 PKTDMA
  dmaengine: ti: k3-udma: Initial support for K3 PKTDMA
  dmaengine: ti: k3-udma: Add support for BCDMA channel TPL handling
  dmaengine: ti: k3-udma: Initial support for K3 BCDMA
  soc: ti: k3-ringacc: add AM64 DMA rings support.
  dmaengine: ti: Add support for k3 event routers
  dmaengine: ti: k3-psil: Add initial map for AM64
  dmaengine: ti: k3-psil: Extend psil_endpoint_config for K3 PKTDMA
  dt-bindings: dma: ti: Add document for K3 PKTDMA
  dt-bindings: dma: ti: Add document for K3 BCDMA
  dmaengine: dmatest: Use dmaengine_get_dma_device
  dmaengine: doc: client: Update for dmaengine_get_dma_device() usage
  dmaengine: Add support for per channel coherency handling
  dmaengine: of-dma: Add support for optional router configuration callback
  dmaengine: ti: k3-udma-glue: Configure the dma_dev for rings
  dmaengine: ti: k3-udma-glue: Get the ringacc from udma_dev
  dmaengine: ti: k3-udma-glue: Add function to get device pointer for DMA API
  dmaengine: ti: k3-udma: Add support for second resource range from sysfw
  dmaengine: ti: k3-udma: Wait for peer teardown completion if supported
  ...
...@@ -77,6 +77,13 @@ Contact: dmaengine@vger.kernel.org
Description: The operation capability bit mask specify the operation types
supported by the this device.
What: /sys/bus/dsa/devices/dsa<m>/pasid_enabled
Date: Oct 27, 2020
KernelVersion: 5.11.0
Contact: dmaengine@vger.kernel.org
Description: To indicate if PASID (process address space identifier) is
enabled or not for this device.
What: /sys/bus/dsa/devices/dsa<m>/state
Date: Oct 25, 2019
KernelVersion: 5.6.0
...@@ -122,6 +129,13 @@ KernelVersion: 5.10.0
Contact: dmaengine@vger.kernel.org
Description: The last executed device administrative command's status/error.
What: /sys/bus/dsa/devices/wq<m>.<n>/block_on_fault
Date: Oct 27, 2020
KernelVersion: 5.11.0
Contact: dmaengine@vger.kernel.org
Description: To indicate block on fault is allowed or not for the work queue
to support on demand paging.
What: /sys/bus/dsa/devices/wq<m>.<n>/group_id
Date: Oct 25, 2019
KernelVersion: 5.6.0
...@@ -190,6 +204,13 @@ Contact: dmaengine@vger.kernel.org
Description: The max batch size for this workqueue. Cannot exceed device
max batch size. Configurable parameter.
What: /sys/bus/dsa/devices/wq<m>.<n>/ats_disable
Date: Nov 13, 2020
KernelVersion: 5.11.0
Contact: dmaengine@vger.kernel.org
Description: Indicate whether ATS disable is turned on for the workqueue.
0 indicates ATS is on, and 1 indicates ATS is off for the workqueue.
What: /sys/bus/dsa/devices/engine<m>.<n>/group_id
Date: Oct 25, 2019
KernelVersion: 5.6.0
......
...@@ -21,6 +21,7 @@ properties:
compatible:
oneOf:
- const: allwinner,sun50i-a64-dma
- const: allwinner,sun50i-a100-dma
- const: allwinner,sun50i-h6-dma
- items:
- const: allwinner,sun8i-r40-dma
...@@ -56,7 +57,9 @@ required:
if:
properties:
compatible:
const: allwinner,sun50i-h6-dma
enum:
- allwinner,sun50i-a100-dma
- allwinner,sun50i-h6-dma
then:
properties:
......
...@@ -2,7 +2,8 @@
* XDMA Controller
Required properties:
- compatible: Should be "atmel,sama5d4-dma" or "microchip,sam9x60-dma".
- compatible: Should be "atmel,sama5d4-dma", "microchip,sam9x60-dma" or
"microchip,sama7g5-dma".
- reg: Should contain DMA registers location and length.
- interrupts: Should contain DMA interrupt.
- #dma-cells: Must be <1>, used to represent the number of integer cells in
......
...@@ -4,6 +4,7 @@ Required properties:
- compatible should contain:
* "mediatek,mt2712-uart-dma" for MT2712 compatible APDMA
* "mediatek,mt6577-uart-dma" for MT6577 and all of the above
* "mediatek,mt8516-uart-dma", "mediatek,mt6577" for MT8516 SoC
- reg: The base address of the APDMA register bank.
......
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/dma/qcom,gpi.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm Technologies Inc GPI DMA controller
maintainers:
- Vinod Koul <vkoul@kernel.org>
description: |
QCOM GPI DMA controller provides DMA capabilities for
peripheral buses such as I2C, UART, and SPI.
allOf:
- $ref: "dma-controller.yaml#"
properties:
compatible:
enum:
- qcom,sdm845-gpi-dma
reg:
maxItems: 1
interrupts:
description:
Interrupt lines for each GPI instance
maxItems: 13
"#dma-cells":
const: 3
description: >
DMA clients must use the format described in dma.txt, giving a phandle
to the DMA controller plus the following 3 integer cells:
- channel: if set to 0xffffffff, any available channel will be allocated
for the client. Otherwise, the exact channel specified will be used.
- seid: serial id of the client as defined in the SoC documentation.
- client: type of the client as defined in dt-bindings/dma/qcom-gpi.h
iommus:
maxItems: 1
dma-channels:
maximum: 31
dma-channel-mask:
maxItems: 1
required:
- compatible
- reg
- interrupts
- "#dma-cells"
- iommus
- dma-channels
- dma-channel-mask
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/dma/qcom-gpi.h>
gpi_dma0: dma-controller@800000 {
compatible = "qcom,gpi-dma";
#dma-cells = <3>;
reg = <0x00800000 0x60000>;
iommus = <&apps_smmu 0x0016 0x0>;
dma-channels = <13>;
dma-channel-mask = <0xfa>;
interrupts = <GIC_SPI 244 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 245 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 246 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 247 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 248 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 249 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 250 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 251 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 252 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 253 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 254 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 255 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 256 IRQ_TYPE_LEVEL_HIGH>;
};
...
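The binding above has clients reference the controller with a phandle plus three cells (channel, seid, client type from dt-bindings/dma/qcom-gpi.h). On the kernel side such a channel is consumed through the generic dmaengine client API; the sketch below is a hypothetical, generic client (the "tx" channel name and the buffer parameters are illustrative and not taken from the binding):

#include <linux/dmaengine.h>

/* Hypothetical client: look up the channel described by the node's
 * dmas/dma-names properties and queue one slave transfer on it.
 */
static int example_gpi_client_xfer(struct device *dev, dma_addr_t buf, size_t len)
{
	struct dma_async_tx_descriptor *desc;
	struct dma_chan *chan;

	chan = dma_request_chan(dev, "tx");	/* resolves the DT dma specifier */
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
					   DMA_PREP_INTERRUPT);
	if (!desc) {
		dma_release_channel(chan);
		return -EINVAL;
	}

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;
}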
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/dma/ti/k3-bcdma.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Texas Instruments K3 DMSS BCDMA Device Tree Bindings
maintainers:
- Peter Ujfalusi <peter.ujfalusi@ti.com>
description: |
The Block Copy DMA (BCDMA) is intended to perform similar functions as the TR
mode channels of K3 UDMA-P.
BCDMA includes block copy channels and Split channels.
Block copy channels mainly used for memory to memory transfers, but with
optional triggers a block copy channel can service peripherals by accessing
directly to memory mapped registers or area.
Split channels can be used to service PSI-L based peripherals.
The peripherals can be PSI-L native or legacy, non PSI-L native peripherals
with PDMAs. PDMA is tasked to act as a bridge between the PSI-L fabric and the
legacy peripheral.
PDMAs can be configured via BCDMA split channel's peer registers to match with
the configuration of the legacy peripheral.
allOf:
- $ref: /schemas/dma/dma-controller.yaml#
properties:
compatible:
const: ti,am64-dmss-bcdma
"#dma-cells":
const: 3
description: |
cell 1: type of the BCDMA channel to be used to service the peripheral:
0 - split channel
1 - block copy channel using global trigger 1
2 - block copy channel using global trigger 2
3 - block copy channel using local trigger
cell 2: parameter for the channel:
if cell 1 is 0 (split channel):
PSI-L thread ID of the remote (to BCDMA) end.
Valid ranges for thread ID depends on the data movement direction:
for source thread IDs (rx): 0 - 0x7fff
for destination thread IDs (tx): 0x8000 - 0xffff
Please refer to the device documentation for the PSI-L thread map and
also the PSI-L peripheral chapter for the correct thread ID.
if cell 1 is 1 or 2 (block copy channel using global trigger):
Unused, ignored
The trigger must be configured for the channel externally to BCDMA,
channels using global triggers should not be requested directly, but
via DMA event router.
if cell 1 is 3 (block copy channel using local trigger):
bchan number of the locally triggered channel
cell 3: ASEL value for the channel
reg:
maxItems: 5
reg-names:
items:
- const: gcfg
- const: bchanrt
- const: rchanrt
- const: tchanrt
- const: ringrt
msi-parent: true
ti,asel:
$ref: /schemas/types.yaml#/definitions/uint32
description: ASEL value for non slave channels
ti,sci-rm-range-bchan:
$ref: /schemas/types.yaml#/definitions/uint32-array
description: |
Array of BCDMA block-copy channel resource subtypes for resource
allocation for this host
minItems: 1
# Should be enough
maxItems: 255
items:
maximum: 0x3f
ti,sci-rm-range-tchan:
$ref: /schemas/types.yaml#/definitions/uint32-array
description: |
Array of BCDMA split tx channel resource subtypes for resource allocation
for this host
minItems: 1
# Should be enough
maxItems: 255
items:
maximum: 0x3f
ti,sci-rm-range-rchan:
$ref: /schemas/types.yaml#/definitions/uint32-array
description: |
Array of BCDMA split rx channel resource subtypes for resource allocation
for this host
minItems: 1
# Should be enough
maxItems: 255
items:
maximum: 0x3f
required:
- compatible
- "#dma-cells"
- reg
- reg-names
- msi-parent
- ti,sci
- ti,sci-dev-id
- ti,sci-rm-range-bchan
- ti,sci-rm-range-tchan
- ti,sci-rm-range-rchan
unevaluatedProperties: false
examples:
- |+
cbass_main {
#address-cells = <2>;
#size-cells = <2>;
main_dmss {
compatible = "simple-mfd";
#address-cells = <2>;
#size-cells = <2>;
dma-ranges;
ranges;
ti,sci-dev-id = <25>;
main_bcdma: dma-controller@485c0100 {
compatible = "ti,am64-dmss-bcdma";
reg = <0x0 0x485c0100 0x0 0x100>,
<0x0 0x4c000000 0x0 0x20000>,
<0x0 0x4a820000 0x0 0x20000>,
<0x0 0x4aa40000 0x0 0x20000>,
<0x0 0x4bc00000 0x0 0x100000>;
reg-names = "gcfg", "bchanrt", "rchanrt", "tchanrt", "ringrt";
msi-parent = <&inta_main_dmss>;
#dma-cells = <3>;
ti,sci = <&dmsc>;
ti,sci-dev-id = <26>;
ti,sci-rm-range-bchan = <0x20>; /* BLOCK_COPY_CHAN */
ti,sci-rm-range-rchan = <0x21>; /* SPLIT_TR_RX_CHAN */
ti,sci-rm-range-tchan = <0x22>; /* SPLIT_TR_TX_CHAN */
};
};
};
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/dma/ti/k3-pktdma.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Texas Instruments K3 DMSS PKTDMA Device Tree Bindings
maintainers:
- Peter Ujfalusi <peter.ujfalusi@ti.com>
description: |
The Packet DMA (PKTDMA) is intended to perform similar functions as the packet
mode channels of K3 UDMA-P.
PKTDMA only includes Split channels to service PSI-L based peripherals.
The peripherals can be PSI-L native or legacy, non PSI-L native peripherals
with PDMAs. PDMA is tasked to act as a bridge between the PSI-L fabric and the
legacy peripheral.
PDMAs can be configured via PKTDMA split channel's peer registers to match
with the configuration of the legacy peripheral.
allOf:
- $ref: /schemas/dma/dma-controller.yaml#
properties:
compatible:
const: ti,am64-dmss-pktdma
"#dma-cells":
const: 2
description: |
The first cell is the PSI-L thread ID of the remote (to PKTDMA) end.
Valid ranges for thread ID depends on the data movement direction:
for source thread IDs (rx): 0 - 0x7fff
for destination thread IDs (tx): 0x8000 - 0xffff
Please refer to the device documentation for the PSI-L thread map and also
the PSI-L peripheral chapter for the correct thread ID.
The second cell is the ASEL value for the channel
reg:
maxItems: 4
reg-names:
items:
- const: gcfg
- const: rchanrt
- const: tchanrt
- const: ringrt
msi-parent: true
ti,sci-rm-range-tchan:
$ref: /schemas/types.yaml#/definitions/uint32-array
description: |
Array of PKTDMA split tx channel resource subtypes for resource allocation
for this host
minItems: 1
# Should be enough
maxItems: 255
items:
maximum: 0x3f
ti,sci-rm-range-tflow:
$ref: /schemas/types.yaml#/definitions/uint32-array
description: |
Array of PKTDMA split tx flow resource subtypes for resource allocation
for this host
minItems: 1
# Should be enough
maxItems: 255
items:
maximum: 0x3f
ti,sci-rm-range-rchan:
$ref: /schemas/types.yaml#/definitions/uint32-array
description: |
Array of PKTDMA split rx channel resource subtypes for resource allocation
for this host
minItems: 1
# Should be enough
maxItems: 255
items:
maximum: 0x3f
ti,sci-rm-range-rflow:
$ref: /schemas/types.yaml#/definitions/uint32-array
description: |
Array of PKTDMA split rx flow resource subtypes for resource allocation
for this host
minItems: 1
# Should be enough
maxItems: 255
items:
maximum: 0x3f
required:
- compatible
- "#dma-cells"
- reg
- reg-names
- msi-parent
- ti,sci
- ti,sci-dev-id
- ti,sci-rm-range-tchan
- ti,sci-rm-range-tflow
- ti,sci-rm-range-rchan
- ti,sci-rm-range-rflow
unevaluatedProperties: false
examples:
- |+
cbass_main {
#address-cells = <2>;
#size-cells = <2>;
main_dmss {
compatible = "simple-mfd";
#address-cells = <2>;
#size-cells = <2>;
dma-ranges;
ranges;
ti,sci-dev-id = <25>;
main_pktdma: dma-controller@485c0000 {
compatible = "ti,am64-dmss-pktdma";
reg = <0x0 0x485c0000 0x0 0x100>,
<0x0 0x4a800000 0x0 0x20000>,
<0x0 0x4aa00000 0x0 0x40000>,
<0x0 0x4b800000 0x0 0x400000>;
reg-names = "gcfg", "rchanrt", "tchanrt", "ringrt";
msi-parent = <&inta_main_dmss>;
#dma-cells = <2>;
ti,sci = <&dmsc>;
ti,sci-dev-id = <30>;
ti,sci-rm-range-tchan = <0x23>, /* UNMAPPED_TX_CHAN */
<0x24>, /* CPSW_TX_CHAN */
<0x25>, /* SAUL_TX_0_CHAN */
<0x26>, /* SAUL_TX_1_CHAN */
<0x27>, /* ICSSG_0_TX_CHAN */
<0x28>; /* ICSSG_1_TX_CHAN */
ti,sci-rm-range-tflow = <0x10>, /* RING_UNMAPPED_TX_CHAN */
<0x11>, /* RING_CPSW_TX_CHAN */
<0x12>, /* RING_SAUL_TX_0_CHAN */
<0x13>, /* RING_SAUL_TX_1_CHAN */
<0x14>, /* RING_ICSSG_0_TX_CHAN */
<0x15>; /* RING_ICSSG_1_TX_CHAN */
ti,sci-rm-range-rchan = <0x29>, /* UNMAPPED_RX_CHAN */
<0x2b>, /* CPSW_RX_CHAN */
<0x2d>, /* SAUL_RX_0_CHAN */
<0x2f>, /* SAUL_RX_1_CHAN */
<0x31>, /* SAUL_RX_2_CHAN */
<0x33>, /* SAUL_RX_3_CHAN */
<0x35>, /* ICSSG_0_RX_CHAN */
<0x37>; /* ICSSG_1_RX_CHAN */
ti,sci-rm-range-rflow = <0x2a>, /* FLOW_UNMAPPED_RX_CHAN */
<0x2c>, /* FLOW_CPSW_RX_CHAN */
<0x2e>, /* FLOW_SAUL_RX_0/1_CHAN */
<0x32>, /* FLOW_SAUL_RX_2/3_CHAN */
<0x36>, /* FLOW_ICSSG_0_RX_CHAN */
<0x38>; /* FLOW_ICSSG_1_RX_CHAN */
};
};
};
...@@ -120,7 +120,9 @@ The details of these operations are:
.. code-block:: c
nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len);
struct device *dma_dev = dmaengine_get_dma_device(chan);
nr_sg = dma_map_sg(dma_dev, sgl, sg_len);
if (nr_sg == 0)
/* error */
......
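The client documentation change above replaces chan->device->dev with dmaengine_get_dma_device(chan), so buffer mappings honour the per-channel coherency handling added elsewhere in this pull. A minimal sketch of the updated client pattern (buffer and length names are placeholders, error handling abbreviated):

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

static int example_map_buffer(struct dma_chan *chan, void *buf, size_t len,
			      dma_addr_t *handle)
{
	/* Ask the dmaengine core which device actually performs the DMA
	 * instead of dereferencing chan->device->dev directly.
	 */
	struct device *dma_dev = dmaengine_get_dma_device(chan);

	*handle = dma_map_single(dma_dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dma_dev, *handle))
		return -ENOMEM;
	return 0;
}

The matching dma_unmap_single() must later be called against the same dma_dev.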
...@@ -296,6 +296,16 @@ config INTEL_IDXD
If unsure, say N.
# Config symbol that collects all the dependencies that's necessary to
# support shared virtual memory for the devices supported by idxd.
config INTEL_IDXD_SVM
bool "Accelerator Shared Virtual Memory Support"
depends on INTEL_IDXD
depends on INTEL_IOMMU_SVM
depends on PCI_PRI
depends on PCI_PASID
depends on PCI_IOV
config INTEL_IOATDMA
tristate "Intel I/OAT DMA support"
depends on PCI && X86_64
......
...@@ -30,7 +30,24 @@
#define AT_XDMAC_FIFO_SZ(i) (((i) >> 5) & 0x7FF) /* Number of Bytes */
#define AT_XDMAC_NB_REQ(i) ((((i) >> 16) & 0x3F) + 1) /* Number of Peripheral Requests Minus One */
#define AT_XDMAC_GCFG 0x04 /* Global Configuration Register */
#define AT_XDMAC_WRHP(i) (((i) & 0xF) << 4)
#define AT_XDMAC_WRMP(i) (((i) & 0xF) << 8)
#define AT_XDMAC_WRLP(i) (((i) & 0xF) << 12)
#define AT_XDMAC_RDHP(i) (((i) & 0xF) << 16)
#define AT_XDMAC_RDMP(i) (((i) & 0xF) << 20)
#define AT_XDMAC_RDLP(i) (((i) & 0xF) << 24)
#define AT_XDMAC_RDSG(i) (((i) & 0xF) << 28)
#define AT_XDMAC_GCFG_M2M (AT_XDMAC_RDLP(0xF) | AT_XDMAC_WRLP(0xF))
#define AT_XDMAC_GCFG_P2M (AT_XDMAC_RDSG(0x1) | AT_XDMAC_RDHP(0x3) | \
AT_XDMAC_WRHP(0x5))
#define AT_XDMAC_GWAC 0x08 /* Global Weighted Arbiter Configuration Register */
#define AT_XDMAC_PW0(i) (((i) & 0xF) << 0)
#define AT_XDMAC_PW1(i) (((i) & 0xF) << 4)
#define AT_XDMAC_PW2(i) (((i) & 0xF) << 8)
#define AT_XDMAC_PW3(i) (((i) & 0xF) << 12)
#define AT_XDMAC_GWAC_M2M 0
#define AT_XDMAC_GWAC_P2M (AT_XDMAC_PW0(0xF) | AT_XDMAC_PW2(0xF))
#define AT_XDMAC_GIE 0x0C /* Global Interrupt Enable Register */
#define AT_XDMAC_GID 0x10 /* Global Interrupt Disable Register */
#define AT_XDMAC_GIM 0x14 /* Global Interrupt Mask Register */
...@@ -38,13 +55,6 @@
#define AT_XDMAC_GE 0x1C /* Global Channel Enable Register */
#define AT_XDMAC_GD 0x20 /* Global Channel Disable Register */
#define AT_XDMAC_GS 0x24 /* Global Channel Status Register */
#define AT_XDMAC_GRS 0x28 /* Global Channel Read Suspend Register */
#define AT_XDMAC_GWS 0x2C /* Global Write Suspend Register */
#define AT_XDMAC_GRWS 0x30 /* Global Channel Read Write Suspend Register */
#define AT_XDMAC_GRWR 0x34 /* Global Channel Read Write Resume Register */
#define AT_XDMAC_GSWR 0x38 /* Global Channel Software Request Register */
#define AT_XDMAC_GSWS 0x3C /* Global channel Software Request Status Register */
#define AT_XDMAC_GSWF 0x40 /* Global Channel Software Flush Request Register */
#define AT_XDMAC_VERSION 0xFFC /* XDMAC Version Register */
/* Channel relative registers offsets */
...@@ -150,8 +160,6 @@
#define AT_XDMAC_CSUS 0x30 /* Channel Source Microblock Stride */
#define AT_XDMAC_CDUS 0x34 /* Channel Destination Microblock Stride */
#define AT_XDMAC_CHAN_REG_BASE 0x50 /* Channel registers base address */
/* Microblock control members */
#define AT_XDMAC_MBR_UBC_UBLEN_MAX 0xFFFFFFUL /* Maximum Microblock Length */
#define AT_XDMAC_MBR_UBC_NDE (0x1 << 24) /* Next Descriptor Enable */
...@@ -179,6 +187,29 @@ enum atc_status {
AT_XDMAC_CHAN_IS_PAUSED,
};
struct at_xdmac_layout {
/* Global Channel Read Suspend Register */
u8 grs;
/* Global Write Suspend Register */
u8 gws;
/* Global Channel Read Write Suspend Register */
u8 grws;
/* Global Channel Read Write Resume Register */
u8 grwr;
/* Global Channel Software Request Register */
u8 gswr;
/* Global channel Software Request Status Register */
u8 gsws;
/* Global Channel Software Flush Request Register */
u8 gswf;
/* Channel reg base */
u8 chan_cc_reg_base;
/* Source/Destination Interface must be specified or not */
bool sdif;
/* AXI queue priority configuration supported */
bool axi_config;
};
/* ----- Channels ----- */
struct at_xdmac_chan {
struct dma_chan chan;
...@@ -212,6 +243,7 @@ struct at_xdmac {
struct clk *clk;
u32 save_gim;
struct dma_pool *at_xdmac_desc_pool;
const struct at_xdmac_layout *layout;
struct at_xdmac_chan chan[];
};
...@@ -244,9 +276,35 @@ struct at_xdmac_desc {
struct list_head xfer_node;
} __aligned(sizeof(u64));
static const struct at_xdmac_layout at_xdmac_sama5d4_layout = {
.grs = 0x28,
.gws = 0x2C,
.grws = 0x30,
.grwr = 0x34,
.gswr = 0x38,
.gsws = 0x3C,
.gswf = 0x40,
.chan_cc_reg_base = 0x50,
.sdif = true,
.axi_config = false,
};
static const struct at_xdmac_layout at_xdmac_sama7g5_layout = {
.grs = 0x30,
.gws = 0x38,
.grws = 0x40,
.grwr = 0x44,
.gswr = 0x48,
.gsws = 0x4C,
.gswf = 0x50,
.chan_cc_reg_base = 0x60,
.sdif = false,
.axi_config = true,
};
static inline void __iomem *at_xdmac_chan_reg_base(struct at_xdmac *atxdmac, unsigned int chan_nb)
{
return atxdmac->regs + (AT_XDMAC_CHAN_REG_BASE + chan_nb * 0x40);
return atxdmac->regs + (atxdmac->layout->chan_cc_reg_base + chan_nb * 0x40);
}
#define at_xdmac_read(atxdmac, reg) readl_relaxed((atxdmac)->regs + (reg))
...@@ -345,8 +403,10 @@ static void at_xdmac_start_xfer(struct at_xdmac_chan *atchan,
first->active_xfer = true;
/* Tell xdmac where to get the first descriptor. */
reg = AT_XDMAC_CNDA_NDA(first->tx_dma_desc.phys)
| AT_XDMAC_CNDA_NDAIF(atchan->memif);
reg = AT_XDMAC_CNDA_NDA(first->tx_dma_desc.phys);
if (atxdmac->layout->sdif)
reg |= AT_XDMAC_CNDA_NDAIF(atchan->memif);
at_xdmac_chan_write(atchan, AT_XDMAC_CNDA, reg);
/*
...@@ -541,6 +601,7 @@ static int at_xdmac_compute_chan_conf(struct dma_chan *chan,
enum dma_transfer_direction direction)
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac *atxdmac = to_at_xdmac(atchan->chan.device);
int csize, dwidth;
if (direction == DMA_DEV_TO_MEM) {
...@@ -548,12 +609,14 @@ static int at_xdmac_compute_chan_conf(struct dma_chan *chan,
AT91_XDMAC_DT_PERID(atchan->perid)
| AT_XDMAC_CC_DAM_INCREMENTED_AM
| AT_XDMAC_CC_SAM_FIXED_AM
| AT_XDMAC_CC_DIF(atchan->memif)
| AT_XDMAC_CC_SIF(atchan->perif)
| AT_XDMAC_CC_SWREQ_HWR_CONNECTED
| AT_XDMAC_CC_DSYNC_PER2MEM
| AT_XDMAC_CC_MBSIZE_SIXTEEN
| AT_XDMAC_CC_TYPE_PER_TRAN;
if (atxdmac->layout->sdif)
atchan->cfg |= AT_XDMAC_CC_DIF(atchan->memif) |
AT_XDMAC_CC_SIF(atchan->perif);
csize = ffs(atchan->sconfig.src_maxburst) - 1;
if (csize < 0) {
dev_err(chan2dev(chan), "invalid src maxburst value\n");
...@@ -571,12 +634,14 @@ static int at_xdmac_compute_chan_conf(struct dma_chan *chan,
AT91_XDMAC_DT_PERID(atchan->perid)
| AT_XDMAC_CC_DAM_FIXED_AM
| AT_XDMAC_CC_SAM_INCREMENTED_AM
| AT_XDMAC_CC_DIF(atchan->perif)
| AT_XDMAC_CC_SIF(atchan->memif)
| AT_XDMAC_CC_SWREQ_HWR_CONNECTED
| AT_XDMAC_CC_DSYNC_MEM2PER
| AT_XDMAC_CC_MBSIZE_SIXTEEN
| AT_XDMAC_CC_TYPE_PER_TRAN;
if (atxdmac->layout->sdif)
atchan->cfg |= AT_XDMAC_CC_DIF(atchan->perif) |
AT_XDMAC_CC_SIF(atchan->memif);
csize = ffs(atchan->sconfig.dst_maxburst) - 1;
if (csize < 0) {
dev_err(chan2dev(chan), "invalid src maxburst value\n");
...@@ -866,10 +931,12 @@ at_xdmac_interleaved_queue_desc(struct dma_chan *chan,
* ERRATA: Even if useless for memory transfers, the PERID has to not
* match the one of another channel. If not, it could lead to spurious
* flag status.
* For SAMA7G5x case, the SIF and DIF fields are no longer used.
* Thus, no need to have the SIF/DIF interfaces here.
* For SAMA5D4x and SAMA5D2x the SIF and DIF are already configured as
* zero.
*/
u32 chan_cc = AT_XDMAC_CC_PERID(0x3f)
u32 chan_cc = AT_XDMAC_CC_PERID(0x7f)
| AT_XDMAC_CC_DIF(0)
| AT_XDMAC_CC_SIF(0)
| AT_XDMAC_CC_MBSIZE_SIXTEEN
| AT_XDMAC_CC_TYPE_MEM_TRAN;
...@@ -1048,12 +1115,14 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
* ERRATA: Even if useless for memory transfers, the PERID has to not
* match the one of another channel. If not, it could lead to spurious
* flag status.
* For SAMA7G5x case, the SIF and DIF fields are no longer used.
* Thus, no need to have the SIF/DIF interfaces here.
* For SAMA5D4x and SAMA5D2x the SIF and DIF are already configured as
* zero.
*/
u32 chan_cc = AT_XDMAC_CC_PERID(0x3f)
u32 chan_cc = AT_XDMAC_CC_PERID(0x7f)
| AT_XDMAC_CC_DAM_INCREMENTED_AM
| AT_XDMAC_CC_SAM_INCREMENTED_AM
| AT_XDMAC_CC_DIF(0)
| AT_XDMAC_CC_SIF(0)
| AT_XDMAC_CC_MBSIZE_SIXTEEN
| AT_XDMAC_CC_TYPE_MEM_TRAN;
unsigned long irqflags;
...@@ -1154,12 +1223,14 @@ static struct at_xdmac_desc *at_xdmac_memset_create_desc(struct dma_chan *chan,
* ERRATA: Even if useless for memory transfers, the PERID has to not
* match the one of another channel. If not, it could lead to spurious
* flag status.
* For SAMA7G5x case, the SIF and DIF fields are no longer used.
* Thus, no need to have the SIF/DIF interfaces here.
* For SAMA5D4x and SAMA5D2x the SIF and DIF are already configured as
* zero.
*/
u32 chan_cc = AT_XDMAC_CC_PERID(0x3f)
u32 chan_cc = AT_XDMAC_CC_PERID(0x7f)
| AT_XDMAC_CC_DAM_UBS_AM
| AT_XDMAC_CC_SAM_INCREMENTED_AM
| AT_XDMAC_CC_DIF(0)
| AT_XDMAC_CC_SIF(0)
| AT_XDMAC_CC_MBSIZE_SIXTEEN
| AT_XDMAC_CC_MEMSET_HW_MODE
| AT_XDMAC_CC_TYPE_MEM_TRAN;
...@@ -1438,7 +1509,7 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
mask = AT_XDMAC_CC_TYPE | AT_XDMAC_CC_DSYNC;
value = AT_XDMAC_CC_TYPE_PER_TRAN | AT_XDMAC_CC_DSYNC_PER2MEM;
if ((desc->lld.mbr_cfg & mask) == value) {
at_xdmac_write(atxdmac, AT_XDMAC_GSWF, atchan->mask);
at_xdmac_write(atxdmac, atxdmac->layout->gswf, atchan->mask);
while (!(at_xdmac_chan_read(atchan, AT_XDMAC_CIS) & AT_XDMAC_CIS_FIS))
cpu_relax();
}
...@@ -1496,7 +1567,7 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
* FIFO flush ensures that data are really written.
*/
if ((desc->lld.mbr_cfg & mask) == value) {
at_xdmac_write(atxdmac, AT_XDMAC_GSWF, atchan->mask);
at_xdmac_write(atxdmac, atxdmac->layout->gswf, atchan->mask);
while (!(at_xdmac_chan_read(atchan, AT_XDMAC_CIS) & AT_XDMAC_CIS_FIS))
cpu_relax();
}
...@@ -1761,7 +1832,7 @@ static int at_xdmac_device_pause(struct dma_chan *chan)
return 0;
spin_lock_irqsave(&atchan->lock, flags);
at_xdmac_write(atxdmac, AT_XDMAC_GRWS, atchan->mask);
at_xdmac_write(atxdmac, atxdmac->layout->grws, atchan->mask);
while (at_xdmac_chan_read(atchan, AT_XDMAC_CC)
& (AT_XDMAC_CC_WRIP | AT_XDMAC_CC_RDIP))
cpu_relax();
...@@ -1784,7 +1855,7 @@ static int at_xdmac_device_resume(struct dma_chan *chan)
return 0;
}
at_xdmac_write(atxdmac, AT_XDMAC_GRWR, atchan->mask);
at_xdmac_write(atxdmac, atxdmac->layout->grwr, atchan->mask);
clear_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status);
spin_unlock_irqrestore(&atchan->lock, flags);
...@@ -1947,6 +2018,30 @@ static int atmel_xdmac_resume(struct device *dev)
}
#endif /* CONFIG_PM_SLEEP */
static void at_xdmac_axi_config(struct platform_device *pdev)
{
struct at_xdmac *atxdmac = (struct at_xdmac *)platform_get_drvdata(pdev);
bool dev_m2m = false;
u32 dma_requests;
if (!atxdmac->layout->axi_config)
return; /* Not supported */
if (!of_property_read_u32(pdev->dev.of_node, "dma-requests",
&dma_requests)) {
dev_info(&pdev->dev, "controller in mem2mem mode.\n");
dev_m2m = true;
}
if (dev_m2m) {
at_xdmac_write(atxdmac, AT_XDMAC_GCFG, AT_XDMAC_GCFG_M2M);
at_xdmac_write(atxdmac, AT_XDMAC_GWAC, AT_XDMAC_GWAC_M2M);
} else {
at_xdmac_write(atxdmac, AT_XDMAC_GCFG, AT_XDMAC_GCFG_P2M);
at_xdmac_write(atxdmac, AT_XDMAC_GWAC, AT_XDMAC_GWAC_P2M);
}
}
static int at_xdmac_probe(struct platform_device *pdev)
{
struct at_xdmac *atxdmac;
...@@ -1986,6 +2081,10 @@ static int at_xdmac_probe(struct platform_device *pdev)
atxdmac->regs = base;
atxdmac->irq = irq;
atxdmac->layout = of_device_get_match_data(&pdev->dev);
if (!atxdmac->layout)
return -ENODEV;
atxdmac->clk = devm_clk_get(&pdev->dev, "dma_clk");
if (IS_ERR(atxdmac->clk)) {
dev_err(&pdev->dev, "can't get dma_clk\n");
...@@ -2087,6 +2186,8 @@ static int at_xdmac_probe(struct platform_device *pdev)
dev_info(&pdev->dev, "%d channels, mapped at 0x%p\n",
nr_channels, atxdmac->regs);
at_xdmac_axi_config(pdev);
return 0;
err_dma_unregister:
...@@ -2128,6 +2229,10 @@ static const struct dev_pm_ops atmel_xdmac_dev_pm_ops = {
static const struct of_device_id atmel_xdmac_dt_ids[] = {
{
.compatible = "atmel,sama5d4-dma",
.data = &at_xdmac_sama5d4_layout,
}, {
.compatible = "microchip,sama7g5-dma",
.data = &at_xdmac_sama7g5_layout,
}, {
/* sentinel */
}
......
...@@ -1044,7 +1044,7 @@ static struct platform_driver jz4780_dma_driver = {
.remove = jz4780_dma_remove,
.driver = {
.name = "jz4780-dma",
.of_match_table = of_match_ptr(jz4780_dma_dt_match),
.of_match_table = jz4780_dma_dt_match,
},
};
......
...@@ -573,6 +573,7 @@ static int dmatest_func(void *data)
struct dmatest_params *params;
struct dma_chan *chan;
struct dma_device *dev;
struct device *dma_dev;
unsigned int error_count;
unsigned int failed_tests = 0;
unsigned int total_tests = 0;
...@@ -606,6 +607,8 @@ static int dmatest_func(void *data)
params = &info->params;
chan = thread->chan;
dev = chan->device;
dma_dev = dmaengine_get_dma_device(chan);
src = &thread->src;
dst = &thread->dst;
if (thread->type == DMA_MEMCPY) {
...@@ -730,7 +733,7 @@ static int dmatest_func(void *data)
filltime = ktime_add(filltime, diff);
}
um = dmaengine_get_unmap_data(dev->dev, src->cnt + dst->cnt,
um = dmaengine_get_unmap_data(dma_dev, src->cnt + dst->cnt,
GFP_KERNEL);
if (!um) {
failed_tests++;
...@@ -745,10 +748,10 @@ static int dmatest_func(void *data)
struct page *pg = virt_to_page(buf);
unsigned long pg_off = offset_in_page(buf);
um->addr[i] = dma_map_page(dev->dev, pg, pg_off,
um->addr[i] = dma_map_page(dma_dev, pg, pg_off,
um->len, DMA_TO_DEVICE);
srcs[i] = um->addr[i] + src->off;
ret = dma_mapping_error(dev->dev, um->addr[i]);
ret = dma_mapping_error(dma_dev, um->addr[i]);
if (ret) {
result("src mapping error", total_tests,
src->off, dst->off, len, ret);
...@@ -763,9 +766,9 @@ static int dmatest_func(void *data)
struct page *pg = virt_to_page(buf);
unsigned long pg_off = offset_in_page(buf);
dsts[i] = dma_map_page(dev->dev, pg, pg_off, um->len,
dsts[i] = dma_map_page(dma_dev, pg, pg_off, um->len,
DMA_BIDIRECTIONAL);
ret = dma_mapping_error(dev->dev, dsts[i]);
ret = dma_mapping_error(dma_dev, dsts[i]);
if (ret) {
result("dst mapping error", total_tests,
src->off, dst->off, len, ret);
......
...@@ -992,7 +992,7 @@ static struct platform_driver dw_driver = {
.remove = dw_remove,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = of_match_ptr(dw_dma_of_id_table),
.of_match_table = dw_dma_of_id_table,
.pm = &dw_axi_dma_pm_ops,
},
};
......
...@@ -982,8 +982,11 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan)
dev_vdbg(chan2dev(chan), "%s\n", __func__);
pm_runtime_get_sync(dw->dma.dev);
/* ASSERT: channel is idle */
if (dma_readl(dw, CH_EN) & dwc->mask) {
pm_runtime_put_sync_suspend(dw->dma.dev);
dev_dbg(chan2dev(chan), "DMA channel not idle?\n");
return -EIO;
}
...@@ -1000,6 +1003,7 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan)
* We need controller-specific data to set up slave transfers.
*/
if (chan->private && !dw_dma_filter(chan, chan->private)) {
pm_runtime_put_sync_suspend(dw->dma.dev);
dev_warn(chan2dev(chan), "Wrong controller-specific data\n");
return -EINVAL;
}
...@@ -1043,6 +1047,8 @@ static void dwc_free_chan_resources(struct dma_chan *chan)
if (!dw->in_use)
do_dw_dma_off(dw);
pm_runtime_put_sync_suspend(dw->dma.dev);
dev_vdbg(chan2dev(chan), "%s: done\n", __func__);
}
......
...@@ -431,9 +431,8 @@ static irqreturn_t hisi_dma_irq(int irq, void *data)
struct hisi_dma_dev *hdma_dev = chan->hdma_dev;
struct hisi_dma_desc *desc;
struct hisi_dma_cqe *cqe;
unsigned long flags;
spin_lock_irqsave(&chan->vc.lock, flags);
spin_lock(&chan->vc.lock);
desc = chan->desc;
cqe = chan->cq + chan->cq_head;
...@@ -452,7 +451,7 @@ static irqreturn_t hisi_dma_irq(int irq, void *data)
chan->desc = NULL;
}
spin_unlock_irqrestore(&chan->vc.lock, flags);
spin_unlock(&chan->vc.lock);
return IRQ_HANDLED;
}
......
...@@ -667,9 +667,7 @@ static int idma64_platform_remove(struct platform_device *pdev)
return idma64_remove(chip);
}
#ifdef CONFIG_PM_SLEEP
static int idma64_pm_suspend(struct device *dev)
static int __maybe_unused idma64_pm_suspend(struct device *dev)
{
struct idma64_chip *chip = dev_get_drvdata(dev);
...@@ -677,7 +675,7 @@ static int idma64_pm_suspend(struct device *dev)
return 0;
}
static int idma64_pm_resume(struct device *dev)
static int __maybe_unused idma64_pm_resume(struct device *dev)
{
struct idma64_chip *chip = dev_get_drvdata(dev);
...@@ -685,8 +683,6 @@ static int idma64_pm_resume(struct device *dev)
return 0;
}
#endif /* CONFIG_PM_SLEEP */
static const struct dev_pm_ops idma64_dev_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(idma64_pm_suspend, idma64_pm_resume)
};
......
...@@ -11,6 +11,7 @@
#include <linux/cdev.h>
#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/iommu.h>
#include <uapi/linux/idxd.h>
#include "registers.h"
#include "idxd.h"
...@@ -27,12 +28,15 @@ struct idxd_cdev_context {
*/
static struct idxd_cdev_context ictx[IDXD_TYPE_MAX] = {
{ .name = "dsa" },
{ .name = "iax" }
};
struct idxd_user_context {
struct idxd_wq *wq;
struct task_struct *task;
unsigned int pasid;
unsigned int flags;
struct iommu_sva *sva;
};
enum idxd_cdev_cleanup {
...@@ -75,6 +79,8 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp)
struct idxd_wq *wq;
struct device *dev;
int rc = 0;
struct iommu_sva *sva;
unsigned int pasid;
wq = inode_wq(inode);
idxd = wq->idxd;
...@@ -95,6 +101,34 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp)
ctx->wq = wq;
filp->private_data = ctx;
if (device_pasid_enabled(idxd)) {
sva = iommu_sva_bind_device(dev, current->mm, NULL);
if (IS_ERR(sva)) {
rc = PTR_ERR(sva);
dev_err(dev, "pasid allocation failed: %d\n", rc);
goto failed;
}
pasid = iommu_sva_get_pasid(sva);
if (pasid == IOMMU_PASID_INVALID) {
iommu_sva_unbind_device(sva);
goto failed;
}
ctx->sva = sva;
ctx->pasid = pasid;
if (wq_dedicated(wq)) {
rc = idxd_wq_set_pasid(wq, pasid);
if (rc < 0) {
iommu_sva_unbind_device(sva);
dev_err(dev, "wq set pasid failed: %d\n", rc);
goto failed;
}
}
}
idxd_wq_get(wq);
mutex_unlock(&wq->wq_lock);
return 0;
...@@ -111,13 +145,27 @@ static int idxd_cdev_release(struct inode *node, struct file *filep)
struct idxd_wq *wq = ctx->wq;
struct idxd_device *idxd = wq->idxd;
struct device *dev = &idxd->pdev->dev;
int rc;
dev_dbg(dev, "%s called\n", __func__);
filep->private_data = NULL;
/* Wait for in-flight operations to complete. */
idxd_wq_drain(wq);
if (wq_shared(wq)) {
idxd_device_drain_pasid(idxd, ctx->pasid);
} else {
if (device_pasid_enabled(idxd)) {
/* The wq disable in the disable pasid function will drain the wq */
rc = idxd_wq_disable_pasid(wq);
if (rc < 0)
dev_err(dev, "wq disable pasid failed.\n");
} else {
idxd_wq_drain(wq);
}
}
if (ctx->sva)
iommu_sva_unbind_device(ctx->sva);
kfree(ctx);
mutex_lock(&wq->wq_lock);
idxd_wq_put(wq);
......
...@@ -131,6 +131,8 @@ int idxd_wq_alloc_resources(struct idxd_wq *wq)
struct idxd_device *idxd = wq->idxd;
struct device *dev = &idxd->pdev->dev;
int rc, num_descs, i;
int align;
u64 tmp;
if (wq->type != IDXD_WQT_KERNEL)
return 0;
...@@ -142,14 +144,27 @@ int idxd_wq_alloc_resources(struct idxd_wq *wq)
if (rc < 0)
return rc;
wq->compls_size = num_descs * sizeof(struct dsa_completion_record);
wq->compls = dma_alloc_coherent(dev, wq->compls_size,
&wq->compls_addr, GFP_KERNEL);
if (!wq->compls) {
if (idxd->type == IDXD_TYPE_DSA)
align = 32;
else if (idxd->type == IDXD_TYPE_IAX)
align = 64;
else
return -ENODEV;
wq->compls_size = num_descs * idxd->compl_size + align;
wq->compls_raw = dma_alloc_coherent(dev, wq->compls_size,
&wq->compls_addr_raw, GFP_KERNEL);
if (!wq->compls_raw) {
rc = -ENOMEM;
goto fail_alloc_compls;
}
/* Adjust alignment */
wq->compls_addr = (wq->compls_addr_raw + (align - 1)) & ~(align - 1);
tmp = (u64)wq->compls_raw;
tmp = (tmp + (align - 1)) & ~(align - 1);
wq->compls = (struct dsa_completion_record *)tmp;
rc = alloc_descs(wq, num_descs);
if (rc < 0)
goto fail_alloc_descs;
...@@ -163,9 +178,11 @@ int idxd_wq_alloc_resources(struct idxd_wq *wq)
struct idxd_desc *desc = wq->descs[i];
desc->hw = wq->hw_descs[i];
desc->completion = &wq->compls[i];
desc->compl_dma = wq->compls_addr +
sizeof(struct dsa_completion_record) * i;
if (idxd->type == IDXD_TYPE_DSA)
desc->completion = &wq->compls[i];
else if (idxd->type == IDXD_TYPE_IAX)
desc->iax_completion = &wq->iax_compls[i];
desc->compl_dma = wq->compls_addr + idxd->compl_size * i;
desc->id = i;
desc->wq = wq;
desc->cpu = -1;
...@@ -178,7 +195,8 @@ int idxd_wq_alloc_resources(struct idxd_wq *wq)
fail_sbitmap_init:
free_descs(wq);
fail_alloc_descs:
dma_free_coherent(dev, wq->compls_size, wq->compls, wq->compls_addr);
dma_free_coherent(dev, wq->compls_size, wq->compls_raw,
wq->compls_addr_raw);
fail_alloc_compls:
free_hw_descs(wq);
return rc;
...@@ -193,7 +211,8 @@ void idxd_wq_free_resources(struct idxd_wq *wq)
free_hw_descs(wq);
free_descs(wq);
dma_free_coherent(dev, wq->compls_size, wq->compls, wq->compls_addr);
dma_free_coherent(dev, wq->compls_size, wq->compls_raw,
wq->compls_addr_raw);
sbitmap_queue_free(&wq->sbq);
}
...@@ -273,10 +292,9 @@ int idxd_wq_map_portal(struct idxd_wq *wq)
start = pci_resource_start(pdev, IDXD_WQ_BAR);
start += idxd_get_wq_portal_full_offset(wq->id, IDXD_PORTAL_LIMITED);
wq->dportal = devm_ioremap(dev, start, IDXD_PORTAL_SIZE);
wq->portal = devm_ioremap(dev, start, IDXD_PORTAL_SIZE);
if (!wq->dportal)
if (!wq->portal)
return -ENOMEM;
dev_dbg(dev, "wq %d portal mapped at %p\n", wq->id, wq->dportal);
return 0;
}
...@@ -285,7 +303,61 @@ void idxd_wq_unmap_portal(struct idxd_wq *wq)
{
struct device *dev = &wq->idxd->pdev->dev;
devm_iounmap(dev, wq->dportal);
devm_iounmap(dev, wq->portal);
}
int idxd_wq_set_pasid(struct idxd_wq *wq, int pasid)
{
struct idxd_device *idxd = wq->idxd;
int rc;
union wqcfg wqcfg;
unsigned int offset;
unsigned long flags;
rc = idxd_wq_disable(wq);
if (rc < 0)
return rc;
offset = WQCFG_OFFSET(idxd, wq->id, WQCFG_PASID_IDX);
spin_lock_irqsave(&idxd->dev_lock, flags);
wqcfg.bits[WQCFG_PASID_IDX] = ioread32(idxd->reg_base + offset);
wqcfg.pasid_en = 1;
wqcfg.pasid = pasid;
iowrite32(wqcfg.bits[WQCFG_PASID_IDX], idxd->reg_base + offset);
spin_unlock_irqrestore(&idxd->dev_lock, flags);
rc = idxd_wq_enable(wq);
if (rc < 0)
return rc;
return 0;
}
int idxd_wq_disable_pasid(struct idxd_wq *wq)
{
struct idxd_device *idxd = wq->idxd;
int rc;
union wqcfg wqcfg;
unsigned int offset;
unsigned long flags;
rc = idxd_wq_disable(wq);
if (rc < 0)
return rc;
offset = WQCFG_OFFSET(idxd, wq->id, WQCFG_PASID_IDX);
spin_lock_irqsave(&idxd->dev_lock, flags);
wqcfg.bits[WQCFG_PASID_IDX] = ioread32(idxd->reg_base + offset);
wqcfg.pasid_en = 0;
wqcfg.pasid = 0;
iowrite32(wqcfg.bits[WQCFG_PASID_IDX], idxd->reg_base + offset);
spin_unlock_irqrestore(&idxd->dev_lock, flags);
rc = idxd_wq_enable(wq);
if (rc < 0)
return rc;
return 0;
} }
void idxd_wq_disable_cleanup(struct idxd_wq *wq)
...@@ -301,6 +373,7 @@ void idxd_wq_disable_cleanup(struct idxd_wq *wq)
wq->group = NULL;
wq->threshold = 0;
wq->priority = 0;
wq->ats_dis = 0;
clear_bit(WQ_FLAG_DEDICATED, &wq->flags);
memset(wq->name, 0, WQ_NAME_SIZE);
...@@ -468,6 +541,17 @@ void idxd_device_reset(struct idxd_device *idxd)
spin_unlock_irqrestore(&idxd->dev_lock, flags);
}
void idxd_device_drain_pasid(struct idxd_device *idxd, int pasid)
{
struct device *dev = &idxd->pdev->dev;
u32 operand;
operand = pasid;
dev_dbg(dev, "cmd: %u operand: %#x\n", IDXD_CMD_DRAIN_PASID, operand);
idxd_cmd_exec(idxd, IDXD_CMD_DRAIN_PASID, operand, NULL);
dev_dbg(dev, "pasid %d drained\n", pasid);
}
/* Device configuration bits */
static void idxd_group_config_write(struct idxd_group *group)
{
...@@ -479,24 +563,22 @@ static void idxd_group_config_write(struct idxd_group *group)
dev_dbg(dev, "Writing group %d cfg registers\n", group->id);
/* setup GRPWQCFG */
for (i = 0; i < 4; i++) {
grpcfg_offset = idxd->grpcfg_offset +
group->id * 64 + i * sizeof(u64);
iowrite64(group->grpcfg.wqs[i],
idxd->reg_base + grpcfg_offset);
for (i = 0; i < GRPWQCFG_STRIDES; i++) {
grpcfg_offset = GRPWQCFG_OFFSET(idxd, group->id, i);
iowrite64(group->grpcfg.wqs[i], idxd->reg_base + grpcfg_offset);
dev_dbg(dev, "GRPCFG wq[%d:%d: %#x]: %#llx\n",
group->id, i, grpcfg_offset,
ioread64(idxd->reg_base + grpcfg_offset));
}
/* setup GRPENGCFG */
grpcfg_offset = idxd->grpcfg_offset + group->id * 64 + 32;
grpcfg_offset = GRPENGCFG_OFFSET(idxd, group->id);
iowrite64(group->grpcfg.engines, idxd->reg_base + grpcfg_offset);
dev_dbg(dev, "GRPCFG engs[%d: %#x]: %#llx\n", group->id,
grpcfg_offset, ioread64(idxd->reg_base + grpcfg_offset));
/* setup GRPFLAGS */
grpcfg_offset = idxd->grpcfg_offset + group->id * 64 + 40;
grpcfg_offset = GRPFLGCFG_OFFSET(idxd, group->id);
iowrite32(group->grpcfg.flags.bits, idxd->reg_base + grpcfg_offset);
dev_dbg(dev, "GRPFLAGS flags[%d: %#x]: %#x\n",
group->id, grpcfg_offset,
...@@ -554,9 +636,24 @@ static int idxd_wq_config_write(struct idxd_wq *wq)
/* byte 8-11 */
wq->wqcfg->priv = !!(wq->type == IDXD_WQT_KERNEL);
wq->wqcfg->mode = 1;
if (wq_dedicated(wq))
wq->wqcfg->mode = 1;
if (device_pasid_enabled(idxd)) {
wq->wqcfg->pasid_en = 1;
if (wq->type == IDXD_WQT_KERNEL && wq_dedicated(wq))
wq->wqcfg->pasid = idxd->pasid;
}
wq->wqcfg->priority = wq->priority;
if (idxd->hw.gen_cap.block_on_fault &&
test_bit(WQ_FLAG_BLOCK_ON_FAULT, &wq->flags))
wq->wqcfg->bof = 1;
if (idxd->hw.wq_cap.wq_ats_support)
wq->wqcfg->wq_ats_disable = wq->ats_dis;
/* bytes 12-15 */
wq->wqcfg->max_xfer_shift = ilog2(wq->max_xfer_bytes);
wq->wqcfg->max_batch_shift = ilog2(wq->max_batch_size);
...@@ -664,8 +761,8 @@ static int idxd_wqs_setup(struct idxd_device *idxd)
if (!wq->size)
continue;
if (!wq_dedicated(wq)) {
dev_warn(dev, "No shared workqueue support.\n");
if (wq_shared(wq) && !device_swq_supported(idxd)) {
dev_warn(dev, "No shared wq support but configured.\n");
return -EINVAL;
}
......
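The idxd_wq_alloc_resources() change above over-allocates the completion-record area by the required alignment (32 bytes for DSA, 64 for IAX) and rounds both the CPU pointer and the DMA handle up before use, since dma_alloc_coherent() only guarantees its own alignment. A standalone sketch of that align-up idiom, with illustrative names only:

#include <linux/kernel.h>

/* Round addr up to the next multiple of align (align is a power of two);
 * this is the same arithmetic applied to wq->compls_raw and
 * wq->compls_addr_raw above, and is equivalent to the kernel's ALIGN() macro.
 */
static inline u64 example_align_up(u64 addr, u64 align)
{
	return (addr + (align - 1)) & ~(align - 1);
}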
...@@ -61,8 +61,6 @@ static inline void idxd_prep_desc_common(struct idxd_wq *wq,
u64 addr_f1, u64 addr_f2, u64 len,
u64 compl, u32 flags)
{
struct idxd_device *idxd = wq->idxd;
hw->flags = flags;
hw->opcode = opcode;
hw->src_addr = addr_f1;
...@@ -70,13 +68,6 @@ static inline void idxd_prep_desc_common(struct idxd_wq *wq,
hw->xfer_size = len;
hw->priv = !!(wq->type == IDXD_WQT_KERNEL);
hw->completion_addr = compl;
/*
* Descriptor completion vectors are 1-8 for MSIX. We will round
* robin through the 8 vectors.
*/
wq->vec_ptr = (wq->vec_ptr % idxd->num_wq_irqs) + 1;
hw->int_handle = wq->vec_ptr;
}
static struct dma_async_tx_descriptor *
......
...@@ -20,7 +20,8 @@ extern struct kmem_cache *idxd_desc_pool;
enum idxd_type {
IDXD_TYPE_UNKNOWN = -1,
IDXD_TYPE_DSA = 0,
IDXD_TYPE_MAX
IDXD_TYPE_IAX,
IDXD_TYPE_MAX,
};
#define IDXD_NAME_SIZE 128
...@@ -34,6 +35,11 @@ struct idxd_irq_entry {
int id;
struct llist_head pending_llist;
struct list_head work_list;
/*
* Lock to protect access between irq thread process descriptor
* and irq thread processing error descriptor.
*/
spinlock_t list_lock;
}; };
struct idxd_group {
...@@ -59,6 +65,7 @@ enum idxd_wq_state {
enum idxd_wq_flag {
WQ_FLAG_DEDICATED = 0,
WQ_FLAG_BLOCK_ON_FAULT,
}; };
enum idxd_wq_type { enum idxd_wq_type {
...@@ -86,10 +93,11 @@ enum idxd_op_type { ...@@ -86,10 +93,11 @@ enum idxd_op_type {
enum idxd_complete_type { enum idxd_complete_type {
IDXD_COMPLETE_NORMAL = 0, IDXD_COMPLETE_NORMAL = 0,
IDXD_COMPLETE_ABORT, IDXD_COMPLETE_ABORT,
IDXD_COMPLETE_DEV_FAIL,
}; };
struct idxd_wq { struct idxd_wq {
void __iomem *dportal; void __iomem *portal;
struct device conf_dev; struct device conf_dev;
struct idxd_cdev idxd_cdev; struct idxd_cdev idxd_cdev;
struct idxd_device *idxd; struct idxd_device *idxd;
...@@ -107,8 +115,13 @@ struct idxd_wq { ...@@ -107,8 +115,13 @@ struct idxd_wq {
u32 vec_ptr; /* interrupt steering */ u32 vec_ptr; /* interrupt steering */
struct dsa_hw_desc **hw_descs; struct dsa_hw_desc **hw_descs;
int num_descs; int num_descs;
struct dsa_completion_record *compls; union {
struct dsa_completion_record *compls;
struct iax_completion_record *iax_compls;
};
void *compls_raw;
dma_addr_t compls_addr; dma_addr_t compls_addr;
dma_addr_t compls_addr_raw;
int compls_size; int compls_size;
struct idxd_desc **descs; struct idxd_desc **descs;
struct sbitmap_queue sbq; struct sbitmap_queue sbq;
...@@ -116,6 +129,7 @@ struct idxd_wq { ...@@ -116,6 +129,7 @@ struct idxd_wq {
char name[WQ_NAME_SIZE + 1]; char name[WQ_NAME_SIZE + 1];
u64 max_xfer_bytes; u64 max_xfer_bytes;
u32 max_batch_size; u32 max_batch_size;
bool ats_dis;
}; };
struct idxd_engine { struct idxd_engine {
...@@ -145,6 +159,7 @@ enum idxd_device_state { ...@@ -145,6 +159,7 @@ enum idxd_device_state {
enum idxd_device_flag { enum idxd_device_flag {
IDXD_FLAG_CONFIGURABLE = 0, IDXD_FLAG_CONFIGURABLE = 0,
IDXD_FLAG_CMD_RUNNING, IDXD_FLAG_CMD_RUNNING,
IDXD_FLAG_PASID_ENABLED,
}; };
struct idxd_device { struct idxd_device {
...@@ -167,6 +182,9 @@ struct idxd_device { ...@@ -167,6 +182,9 @@ struct idxd_device {
struct idxd_wq *wqs; struct idxd_wq *wqs;
struct idxd_engine *engines; struct idxd_engine *engines;
struct iommu_sva *sva;
unsigned int pasid;
int num_groups; int num_groups;
u32 msix_perm_offset; u32 msix_perm_offset;
...@@ -184,6 +202,7 @@ struct idxd_device { ...@@ -184,6 +202,7 @@ struct idxd_device {
int token_limit; int token_limit;
int nr_tokens; /* non-reserved tokens */ int nr_tokens; /* non-reserved tokens */
unsigned int wqcfg_size; unsigned int wqcfg_size;
int compl_size;
union sw_err_reg sw_err; union sw_err_reg sw_err;
wait_queue_head_t cmd_waitq; wait_queue_head_t cmd_waitq;
...@@ -198,9 +217,15 @@ struct idxd_device { ...@@ -198,9 +217,15 @@ struct idxd_device {
/* IDXD software descriptor */ /* IDXD software descriptor */
struct idxd_desc { struct idxd_desc {
struct dsa_hw_desc *hw; union {
struct dsa_hw_desc *hw;
struct iax_hw_desc *iax_hw;
};
dma_addr_t desc_dma; dma_addr_t desc_dma;
struct dsa_completion_record *completion; union {
struct dsa_completion_record *completion;
struct iax_completion_record *iax_completion;
};
dma_addr_t compl_dma; dma_addr_t compl_dma;
struct dma_async_tx_descriptor txd; struct dma_async_tx_descriptor txd;
struct llist_node llnode; struct llist_node llnode;
...@@ -214,12 +239,30 @@ struct idxd_desc { ...@@ -214,12 +239,30 @@ struct idxd_desc {
#define confdev_to_wq(dev) container_of(dev, struct idxd_wq, conf_dev) #define confdev_to_wq(dev) container_of(dev, struct idxd_wq, conf_dev)
extern struct bus_type dsa_bus_type; extern struct bus_type dsa_bus_type;
extern struct bus_type iax_bus_type;
extern bool support_enqcmd;
static inline bool wq_dedicated(struct idxd_wq *wq) static inline bool wq_dedicated(struct idxd_wq *wq)
{ {
return test_bit(WQ_FLAG_DEDICATED, &wq->flags); return test_bit(WQ_FLAG_DEDICATED, &wq->flags);
} }
static inline bool wq_shared(struct idxd_wq *wq)
{
return !test_bit(WQ_FLAG_DEDICATED, &wq->flags);
}
static inline bool device_pasid_enabled(struct idxd_device *idxd)
{
return test_bit(IDXD_FLAG_PASID_ENABLED, &idxd->flags);
}
static inline bool device_swq_supported(struct idxd_device *idxd)
{
return (support_enqcmd && device_pasid_enabled(idxd));
}
enum idxd_portal_prot { enum idxd_portal_prot {
IDXD_PORTAL_UNLIMITED = 0, IDXD_PORTAL_UNLIMITED = 0,
IDXD_PORTAL_LIMITED, IDXD_PORTAL_LIMITED,
...@@ -242,6 +285,8 @@ static inline void idxd_set_type(struct idxd_device *idxd) ...@@ -242,6 +285,8 @@ static inline void idxd_set_type(struct idxd_device *idxd)
if (pdev->device == PCI_DEVICE_ID_INTEL_DSA_SPR0) if (pdev->device == PCI_DEVICE_ID_INTEL_DSA_SPR0)
idxd->type = IDXD_TYPE_DSA; idxd->type = IDXD_TYPE_DSA;
else if (pdev->device == PCI_DEVICE_ID_INTEL_IAX_SPR0)
idxd->type = IDXD_TYPE_IAX;
else else
idxd->type = IDXD_TYPE_UNKNOWN; idxd->type = IDXD_TYPE_UNKNOWN;
} }
...@@ -288,6 +333,7 @@ void idxd_device_reset(struct idxd_device *idxd); ...@@ -288,6 +333,7 @@ void idxd_device_reset(struct idxd_device *idxd);
void idxd_device_cleanup(struct idxd_device *idxd); void idxd_device_cleanup(struct idxd_device *idxd);
int idxd_device_config(struct idxd_device *idxd); int idxd_device_config(struct idxd_device *idxd);
void idxd_device_wqs_clear_state(struct idxd_device *idxd); void idxd_device_wqs_clear_state(struct idxd_device *idxd);
void idxd_device_drain_pasid(struct idxd_device *idxd, int pasid);
/* work queue control */ /* work queue control */
int idxd_wq_alloc_resources(struct idxd_wq *wq); int idxd_wq_alloc_resources(struct idxd_wq *wq);
...@@ -298,6 +344,8 @@ void idxd_wq_drain(struct idxd_wq *wq); ...@@ -298,6 +344,8 @@ void idxd_wq_drain(struct idxd_wq *wq);
int idxd_wq_map_portal(struct idxd_wq *wq); int idxd_wq_map_portal(struct idxd_wq *wq);
void idxd_wq_unmap_portal(struct idxd_wq *wq); void idxd_wq_unmap_portal(struct idxd_wq *wq);
void idxd_wq_disable_cleanup(struct idxd_wq *wq); void idxd_wq_disable_cleanup(struct idxd_wq *wq);
int idxd_wq_set_pasid(struct idxd_wq *wq, int pasid);
int idxd_wq_disable_pasid(struct idxd_wq *wq);
/* submission */ /* submission */
int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc); int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc);
......
...@@ -14,6 +14,8 @@ ...@@ -14,6 +14,8 @@
#include <linux/io-64-nonatomic-lo-hi.h> #include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/device.h> #include <linux/device.h>
#include <linux/idr.h> #include <linux/idr.h>
#include <linux/intel-svm.h>
#include <linux/iommu.h>
#include <uapi/linux/idxd.h> #include <uapi/linux/idxd.h>
#include <linux/dmaengine.h> #include <linux/dmaengine.h>
#include "../dmaengine.h" #include "../dmaengine.h"
...@@ -26,18 +28,24 @@ MODULE_AUTHOR("Intel Corporation"); ...@@ -26,18 +28,24 @@ MODULE_AUTHOR("Intel Corporation");
#define DRV_NAME "idxd" #define DRV_NAME "idxd"
bool support_enqcmd;
static struct idr idxd_idrs[IDXD_TYPE_MAX]; static struct idr idxd_idrs[IDXD_TYPE_MAX];
static struct mutex idxd_idr_lock; static struct mutex idxd_idr_lock;
static struct pci_device_id idxd_pci_tbl[] = { static struct pci_device_id idxd_pci_tbl[] = {
/* DSA ver 1.0 platforms */ /* DSA ver 1.0 platforms */
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_DSA_SPR0) }, { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_DSA_SPR0) },
/* IAX ver 1.0 platforms */
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IAX_SPR0) },
{ 0, } { 0, }
}; };
MODULE_DEVICE_TABLE(pci, idxd_pci_tbl); MODULE_DEVICE_TABLE(pci, idxd_pci_tbl);
static char *idxd_name[] = { static char *idxd_name[] = {
"dsa", "dsa",
"iax"
}; };
const char *idxd_get_dev_name(struct idxd_device *idxd) const char *idxd_get_dev_name(struct idxd_device *idxd)
...@@ -53,6 +61,7 @@ static int idxd_setup_interrupts(struct idxd_device *idxd) ...@@ -53,6 +61,7 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
struct idxd_irq_entry *irq_entry; struct idxd_irq_entry *irq_entry;
int i, msixcnt; int i, msixcnt;
int rc = 0; int rc = 0;
union msix_perm mperm;
msixcnt = pci_msix_vec_count(pdev); msixcnt = pci_msix_vec_count(pdev);
if (msixcnt < 0) { if (msixcnt < 0) {
...@@ -92,6 +101,7 @@ static int idxd_setup_interrupts(struct idxd_device *idxd) ...@@ -92,6 +101,7 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
for (i = 0; i < msixcnt; i++) { for (i = 0; i < msixcnt; i++) {
idxd->irq_entries[i].id = i; idxd->irq_entries[i].id = i;
idxd->irq_entries[i].idxd = idxd; idxd->irq_entries[i].idxd = idxd;
spin_lock_init(&idxd->irq_entries[i].list_lock);
} }
msix = &idxd->msix_entries[0]; msix = &idxd->msix_entries[0];
...@@ -131,6 +141,13 @@ static int idxd_setup_interrupts(struct idxd_device *idxd) ...@@ -131,6 +141,13 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
idxd_unmask_error_interrupts(idxd); idxd_unmask_error_interrupts(idxd);
/* Setup MSIX permission table */
mperm.bits = 0;
mperm.pasid = idxd->pasid;
mperm.pasid_en = device_pasid_enabled(idxd);
for (i = 1; i < msixcnt; i++)
iowrite32(mperm.bits, idxd->reg_base + idxd->msix_perm_offset + i * 8);
return 0; return 0;
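/*
 * Worked example (illustrative only, values assumed): with
 * msix_perm_offset == 0x2000, PASID enabled and idxd->pasid == 5, the
 * loop above writes MSIX permission entry 1 at 0x2000 + 1 * 8 = 0x2008
 * with pasid_en = 1 and pasid = 5, entry 2 at 0x2010, and so on.
 * Entry 0 (the misc/error vector) is left at its default.
 */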
err_no_irq: err_no_irq:
...@@ -201,17 +218,14 @@ static void idxd_read_table_offsets(struct idxd_device *idxd) ...@@ -201,17 +218,14 @@ static void idxd_read_table_offsets(struct idxd_device *idxd)
struct device *dev = &idxd->pdev->dev; struct device *dev = &idxd->pdev->dev;
offsets.bits[0] = ioread64(idxd->reg_base + IDXD_TABLE_OFFSET); offsets.bits[0] = ioread64(idxd->reg_base + IDXD_TABLE_OFFSET);
offsets.bits[1] = ioread64(idxd->reg_base + IDXD_TABLE_OFFSET offsets.bits[1] = ioread64(idxd->reg_base + IDXD_TABLE_OFFSET + sizeof(u64));
+ sizeof(u64)); idxd->grpcfg_offset = offsets.grpcfg * IDXD_TABLE_MULT;
idxd->grpcfg_offset = offsets.grpcfg * 0x100;
dev_dbg(dev, "IDXD Group Config Offset: %#x\n", idxd->grpcfg_offset); dev_dbg(dev, "IDXD Group Config Offset: %#x\n", idxd->grpcfg_offset);
idxd->wqcfg_offset = offsets.wqcfg * 0x100; idxd->wqcfg_offset = offsets.wqcfg * IDXD_TABLE_MULT;
dev_dbg(dev, "IDXD Work Queue Config Offset: %#x\n", dev_dbg(dev, "IDXD Work Queue Config Offset: %#x\n", idxd->wqcfg_offset);
idxd->wqcfg_offset); idxd->msix_perm_offset = offsets.msix_perm * IDXD_TABLE_MULT;
idxd->msix_perm_offset = offsets.msix_perm * 0x100; dev_dbg(dev, "IDXD MSIX Permission Offset: %#x\n", idxd->msix_perm_offset);
dev_dbg(dev, "IDXD MSIX Permission Offset: %#x\n", idxd->perfmon_offset = offsets.perfmon * IDXD_TABLE_MULT;
idxd->msix_perm_offset);
idxd->perfmon_offset = offsets.perfmon * 0x100;
dev_dbg(dev, "IDXD Perfmon Offset: %#x\n", idxd->perfmon_offset); dev_dbg(dev, "IDXD Perfmon Offset: %#x\n", idxd->perfmon_offset);
} }
...@@ -265,8 +279,7 @@ static void idxd_read_caps(struct idxd_device *idxd) ...@@ -265,8 +279,7 @@ static void idxd_read_caps(struct idxd_device *idxd)
} }
} }
static struct idxd_device *idxd_alloc(struct pci_dev *pdev, static struct idxd_device *idxd_alloc(struct pci_dev *pdev)
void __iomem * const *iomap)
{ {
struct device *dev = &pdev->dev; struct device *dev = &pdev->dev;
struct idxd_device *idxd; struct idxd_device *idxd;
...@@ -276,12 +289,45 @@ static struct idxd_device *idxd_alloc(struct pci_dev *pdev, ...@@ -276,12 +289,45 @@ static struct idxd_device *idxd_alloc(struct pci_dev *pdev,
return NULL; return NULL;
idxd->pdev = pdev; idxd->pdev = pdev;
idxd->reg_base = iomap[IDXD_MMIO_BAR];
spin_lock_init(&idxd->dev_lock); spin_lock_init(&idxd->dev_lock);
return idxd; return idxd;
} }
static int idxd_enable_system_pasid(struct idxd_device *idxd)
{
int flags;
unsigned int pasid;
struct iommu_sva *sva;
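/*
 * SVM_FLAG_SUPERVISOR_MODE requests a supervisor PASID bound to the
 * kernel address space (init_mm), so in-kernel users can hand the
 * device kernel virtual addresses directly.
 */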
flags = SVM_FLAG_SUPERVISOR_MODE;
sva = iommu_sva_bind_device(&idxd->pdev->dev, NULL, &flags);
if (IS_ERR(sva)) {
dev_warn(&idxd->pdev->dev,
"iommu sva bind failed: %ld\n", PTR_ERR(sva));
return PTR_ERR(sva);
}
pasid = iommu_sva_get_pasid(sva);
if (pasid == IOMMU_PASID_INVALID) {
iommu_sva_unbind_device(sva);
return -ENODEV;
}
idxd->sva = sva;
idxd->pasid = pasid;
dev_dbg(&idxd->pdev->dev, "system pasid: %u\n", pasid);
return 0;
}
static void idxd_disable_system_pasid(struct idxd_device *idxd)
{
iommu_sva_unbind_device(idxd->sva);
idxd->sva = NULL;
}
static int idxd_probe(struct idxd_device *idxd) static int idxd_probe(struct idxd_device *idxd)
{ {
struct pci_dev *pdev = idxd->pdev; struct pci_dev *pdev = idxd->pdev;
...@@ -292,6 +338,14 @@ static int idxd_probe(struct idxd_device *idxd) ...@@ -292,6 +338,14 @@ static int idxd_probe(struct idxd_device *idxd)
idxd_device_init_reset(idxd); idxd_device_init_reset(idxd);
dev_dbg(dev, "IDXD reset complete\n"); dev_dbg(dev, "IDXD reset complete\n");
if (IS_ENABLED(CONFIG_INTEL_IDXD_SVM)) {
rc = idxd_enable_system_pasid(idxd);
if (rc < 0)
dev_warn(dev, "Failed to enable PASID. No SVA support: %d\n", rc);
else
set_bit(IDXD_FLAG_PASID_ENABLED, &idxd->flags);
}
idxd_read_caps(idxd); idxd_read_caps(idxd);
idxd_read_table_offsets(idxd); idxd_read_table_offsets(idxd);
...@@ -322,29 +376,37 @@ static int idxd_probe(struct idxd_device *idxd) ...@@ -322,29 +376,37 @@ static int idxd_probe(struct idxd_device *idxd)
idxd_mask_error_interrupts(idxd); idxd_mask_error_interrupts(idxd);
idxd_mask_msix_vectors(idxd); idxd_mask_msix_vectors(idxd);
err_setup: err_setup:
if (device_pasid_enabled(idxd))
idxd_disable_system_pasid(idxd);
return rc; return rc;
} }
static void idxd_type_init(struct idxd_device *idxd)
{
if (idxd->type == IDXD_TYPE_DSA)
idxd->compl_size = sizeof(struct dsa_completion_record);
else if (idxd->type == IDXD_TYPE_IAX)
idxd->compl_size = sizeof(struct iax_completion_record);
}
static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{ {
void __iomem * const *iomap;
struct device *dev = &pdev->dev; struct device *dev = &pdev->dev;
struct idxd_device *idxd; struct idxd_device *idxd;
int rc; int rc;
unsigned int mask;
rc = pcim_enable_device(pdev); rc = pcim_enable_device(pdev);
if (rc) if (rc)
return rc; return rc;
dev_dbg(dev, "Mapping BARs\n"); dev_dbg(dev, "Alloc IDXD context\n");
mask = (1 << IDXD_MMIO_BAR); idxd = idxd_alloc(pdev);
rc = pcim_iomap_regions(pdev, mask, DRV_NAME); if (!idxd)
if (rc) return -ENOMEM;
return rc;
iomap = pcim_iomap_table(pdev); dev_dbg(dev, "Mapping BARs\n");
if (!iomap) idxd->reg_base = pcim_iomap(pdev, IDXD_MMIO_BAR, 0);
if (!idxd->reg_base)
return -ENOMEM; return -ENOMEM;
dev_dbg(dev, "Set DMA masks\n"); dev_dbg(dev, "Set DMA masks\n");
...@@ -360,13 +422,10 @@ static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) ...@@ -360,13 +422,10 @@ static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (rc) if (rc)
return rc; return rc;
dev_dbg(dev, "Alloc IDXD context\n");
idxd = idxd_alloc(pdev, iomap);
if (!idxd)
return -ENOMEM;
idxd_set_type(idxd); idxd_set_type(idxd);
idxd_type_init(idxd);
dev_dbg(dev, "Set PCI master\n"); dev_dbg(dev, "Set PCI master\n");
pci_set_master(pdev); pci_set_master(pdev);
pci_set_drvdata(pdev, idxd); pci_set_drvdata(pdev, idxd);
...@@ -452,6 +511,8 @@ static void idxd_remove(struct pci_dev *pdev) ...@@ -452,6 +511,8 @@ static void idxd_remove(struct pci_dev *pdev)
dev_dbg(&pdev->dev, "%s called\n", __func__); dev_dbg(&pdev->dev, "%s called\n", __func__);
idxd_cleanup_sysfs(idxd); idxd_cleanup_sysfs(idxd);
idxd_shutdown(pdev); idxd_shutdown(pdev);
if (device_pasid_enabled(idxd))
idxd_disable_system_pasid(idxd);
mutex_lock(&idxd_idr_lock); mutex_lock(&idxd_idr_lock);
idr_remove(&idxd_idrs[idxd->type], idxd->id); idr_remove(&idxd_idrs[idxd->type], idxd->id);
mutex_unlock(&idxd_idr_lock); mutex_unlock(&idxd_idr_lock);
...@@ -470,7 +531,7 @@ static int __init idxd_init_module(void) ...@@ -470,7 +531,7 @@ static int __init idxd_init_module(void)
int err, i; int err, i;
/* /*
* If the CPU does not support write512, there's no point in * If the CPU does not support MOVDIR64B or ENQCMDS, there's no point in
* enumerating the device. We can not utilize it. * enumerating the device. We can not utilize it.
*/ */
if (!boot_cpu_has(X86_FEATURE_MOVDIR64B)) { if (!boot_cpu_has(X86_FEATURE_MOVDIR64B)) {
...@@ -478,8 +539,10 @@ static int __init idxd_init_module(void) ...@@ -478,8 +539,10 @@ static int __init idxd_init_module(void)
return -ENODEV; return -ENODEV;
} }
pr_info("%s: Intel(R) Accelerator Devices Driver %s\n", if (!boot_cpu_has(X86_FEATURE_ENQCMD))
DRV_NAME, IDXD_DRIVER_VERSION); pr_warn("Platform does not have ENQCMD(S) support.\n");
else
support_enqcmd = true;
mutex_init(&idxd_idr_lock); mutex_init(&idxd_idr_lock);
for (i = 0; i < IDXD_TYPE_MAX; i++) for (i = 0; i < IDXD_TYPE_MAX; i++)
......
...@@ -11,6 +11,24 @@ ...@@ -11,6 +11,24 @@
#include "idxd.h" #include "idxd.h"
#include "registers.h" #include "registers.h"
enum irq_work_type {
IRQ_WORK_NORMAL = 0,
IRQ_WORK_PROCESS_FAULT,
};
struct idxd_fault {
struct work_struct work;
u64 addr;
struct idxd_device *idxd;
};
static int irq_process_work_list(struct idxd_irq_entry *irq_entry,
enum irq_work_type wtype,
int *processed, u64 data);
static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry,
enum irq_work_type wtype,
int *processed, u64 data);
static void idxd_device_reinit(struct work_struct *work) static void idxd_device_reinit(struct work_struct *work)
{ {
struct idxd_device *idxd = container_of(work, struct idxd_device, work); struct idxd_device *idxd = container_of(work, struct idxd_device, work);
...@@ -44,6 +62,46 @@ static void idxd_device_reinit(struct work_struct *work) ...@@ -44,6 +62,46 @@ static void idxd_device_reinit(struct work_struct *work)
idxd_device_wqs_clear_state(idxd); idxd_device_wqs_clear_state(idxd);
} }
static void idxd_device_fault_work(struct work_struct *work)
{
struct idxd_fault *fault = container_of(work, struct idxd_fault, work);
struct idxd_irq_entry *ie;
int i;
int processed;
int irqcnt = fault->idxd->num_wq_irqs + 1;
for (i = 1; i < irqcnt; i++) {
ie = &fault->idxd->irq_entries[i];
irq_process_work_list(ie, IRQ_WORK_PROCESS_FAULT,
&processed, fault->addr);
if (processed)
break;
irq_process_pending_llist(ie, IRQ_WORK_PROCESS_FAULT,
&processed, fault->addr);
if (processed)
break;
}
kfree(fault);
}
static int idxd_device_schedule_fault_process(struct idxd_device *idxd,
u64 fault_addr)
{
struct idxd_fault *fault;
fault = kmalloc(sizeof(*fault), GFP_ATOMIC);
if (!fault)
return -ENOMEM;
fault->addr = fault_addr;
fault->idxd = idxd;
INIT_WORK(&fault->work, idxd_device_fault_work);
queue_work(idxd->wq, &fault->work);
return 0;
}
irqreturn_t idxd_irq_handler(int vec, void *data) irqreturn_t idxd_irq_handler(int vec, void *data)
{ {
struct idxd_irq_entry *irq_entry = data; struct idxd_irq_entry *irq_entry = data;
...@@ -125,6 +183,15 @@ irqreturn_t idxd_misc_thread(int vec, void *data) ...@@ -125,6 +183,15 @@ irqreturn_t idxd_misc_thread(int vec, void *data)
if (!err) if (!err)
goto out; goto out;
/*
 * This case should rarely happen and is typically due to a software
 * programming error by the driver.
 */
if (idxd->sw_err.valid &&
idxd->sw_err.desc_valid &&
idxd->sw_err.fault_addr)
idxd_device_schedule_fault_process(idxd, idxd->sw_err.fault_addr);
gensts.bits = ioread32(idxd->reg_base + IDXD_GENSTATS_OFFSET); gensts.bits = ioread32(idxd->reg_base + IDXD_GENSTATS_OFFSET);
if (gensts.state == IDXD_DEVICE_STATE_HALT) { if (gensts.state == IDXD_DEVICE_STATE_HALT) {
idxd->state = IDXD_DEV_HALTED; idxd->state = IDXD_DEV_HALTED;
...@@ -152,57 +219,110 @@ irqreturn_t idxd_misc_thread(int vec, void *data) ...@@ -152,57 +219,110 @@ irqreturn_t idxd_misc_thread(int vec, void *data)
return IRQ_HANDLED; return IRQ_HANDLED;
} }
static bool process_fault(struct idxd_desc *desc, u64 fault_addr)
{
/*
 * The completion address can be bad as well. Check whether the fault
 * address matches either the descriptor or the completion record address.
 */
if ((u64)desc->hw == fault_addr ||
(u64)desc->completion == fault_addr) {
idxd_dma_complete_txd(desc, IDXD_COMPLETE_DEV_FAIL);
return true;
}
return false;
}
static bool complete_desc(struct idxd_desc *desc)
{
if (desc->completion->status) {
idxd_dma_complete_txd(desc, IDXD_COMPLETE_NORMAL);
return true;
}
return false;
}
static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry, static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry,
int *processed) enum irq_work_type wtype,
int *processed, u64 data)
{ {
struct idxd_desc *desc, *t; struct idxd_desc *desc, *t;
struct llist_node *head; struct llist_node *head;
int queued = 0; int queued = 0;
bool completed = false;
unsigned long flags;
*processed = 0; *processed = 0;
head = llist_del_all(&irq_entry->pending_llist); head = llist_del_all(&irq_entry->pending_llist);
if (!head) if (!head)
return 0; goto out;
llist_for_each_entry_safe(desc, t, head, llnode) { llist_for_each_entry_safe(desc, t, head, llnode) {
if (desc->completion->status) { if (wtype == IRQ_WORK_NORMAL)
idxd_dma_complete_txd(desc, IDXD_COMPLETE_NORMAL); completed = complete_desc(desc);
else if (wtype == IRQ_WORK_PROCESS_FAULT)
completed = process_fault(desc, data);
if (completed) {
idxd_free_desc(desc->wq, desc); idxd_free_desc(desc->wq, desc);
(*processed)++; (*processed)++;
if (wtype == IRQ_WORK_PROCESS_FAULT)
break;
} else { } else {
list_add_tail(&desc->list, &irq_entry->work_list); spin_lock_irqsave(&irq_entry->list_lock, flags);
list_add_tail(&desc->list,
&irq_entry->work_list);
spin_unlock_irqrestore(&irq_entry->list_lock, flags);
queued++; queued++;
} }
} }
out:
return queued; return queued;
} }
static int irq_process_work_list(struct idxd_irq_entry *irq_entry, static int irq_process_work_list(struct idxd_irq_entry *irq_entry,
int *processed) enum irq_work_type wtype,
int *processed, u64 data)
{ {
struct list_head *node, *next; struct list_head *node, *next;
int queued = 0; int queued = 0;
bool completed = false;
unsigned long flags;
*processed = 0; *processed = 0;
spin_lock_irqsave(&irq_entry->list_lock, flags);
if (list_empty(&irq_entry->work_list)) if (list_empty(&irq_entry->work_list))
return 0; goto out;
list_for_each_safe(node, next, &irq_entry->work_list) { list_for_each_safe(node, next, &irq_entry->work_list) {
struct idxd_desc *desc = struct idxd_desc *desc =
container_of(node, struct idxd_desc, list); container_of(node, struct idxd_desc, list);
if (desc->completion->status) { spin_unlock_irqrestore(&irq_entry->list_lock, flags);
if (wtype == IRQ_WORK_NORMAL)
completed = complete_desc(desc);
else if (wtype == IRQ_WORK_PROCESS_FAULT)
completed = process_fault(desc, data);
if (completed) {
spin_lock_irqsave(&irq_entry->list_lock, flags);
list_del(&desc->list); list_del(&desc->list);
/* process and callback */ spin_unlock_irqrestore(&irq_entry->list_lock, flags);
idxd_dma_complete_txd(desc, IDXD_COMPLETE_NORMAL);
idxd_free_desc(desc->wq, desc); idxd_free_desc(desc->wq, desc);
(*processed)++; (*processed)++;
if (wtype == IRQ_WORK_PROCESS_FAULT)
return queued;
} else { } else {
queued++; queued++;
} }
spin_lock_irqsave(&irq_entry->list_lock, flags);
} }
out:
spin_unlock_irqrestore(&irq_entry->list_lock, flags);
return queued; return queued;
} }
...@@ -230,12 +350,14 @@ static int idxd_desc_process(struct idxd_irq_entry *irq_entry) ...@@ -230,12 +350,14 @@ static int idxd_desc_process(struct idxd_irq_entry *irq_entry)
* 5. Repeat until no more descriptors. * 5. Repeat until no more descriptors.
*/ */
do { do {
rc = irq_process_work_list(irq_entry, &processed); rc = irq_process_work_list(irq_entry, IRQ_WORK_NORMAL,
&processed, 0);
total += processed; total += processed;
if (rc != 0) if (rc != 0)
continue; continue;
rc = irq_process_pending_llist(irq_entry, &processed); rc = irq_process_pending_llist(irq_entry, IRQ_WORK_NORMAL,
&processed, 0);
total += processed; total += processed;
} while (rc != 0); } while (rc != 0);
......
...@@ -5,6 +5,7 @@ ...@@ -5,6 +5,7 @@
/* PCI Config */ /* PCI Config */
#define PCI_DEVICE_ID_INTEL_DSA_SPR0 0x0b25 #define PCI_DEVICE_ID_INTEL_DSA_SPR0 0x0b25
#define PCI_DEVICE_ID_INTEL_IAX_SPR0 0x0cfe
#define IDXD_MMIO_BAR 0 #define IDXD_MMIO_BAR 0
#define IDXD_WQ_BAR 2 #define IDXD_WQ_BAR 2
...@@ -47,7 +48,7 @@ union wq_cap_reg { ...@@ -47,7 +48,7 @@ union wq_cap_reg {
u64 rsvd:20; u64 rsvd:20;
u64 shared_mode:1; u64 shared_mode:1;
u64 dedicated_mode:1; u64 dedicated_mode:1;
u64 rsvd2:1; u64 wq_ats_support:1;
u64 priority:1; u64 priority:1;
u64 occupancy:1; u64 occupancy:1;
u64 occupancy_int:1; u64 occupancy_int:1;
...@@ -102,6 +103,8 @@ union offsets_reg { ...@@ -102,6 +103,8 @@ union offsets_reg {
u64 bits[2]; u64 bits[2];
} __packed; } __packed;
#define IDXD_TABLE_MULT 0x100
#define IDXD_GENCFG_OFFSET 0x80 #define IDXD_GENCFG_OFFSET 0x80
union gencfg_reg { union gencfg_reg {
struct { struct {
...@@ -301,7 +304,8 @@ union wqcfg { ...@@ -301,7 +304,8 @@ union wqcfg {
/* bytes 8-11 */ /* bytes 8-11 */
u32 mode:1; /* shared or dedicated */ u32 mode:1; /* shared or dedicated */
u32 bof:1; /* block on fault */ u32 bof:1; /* block on fault */
u32 rsvd2:2; u32 wq_ats_disable:1;
u32 rsvd2:1;
u32 priority:4; u32 priority:4;
u32 pasid:20; u32 pasid:20;
u32 pasid_en:1; u32 pasid_en:1;
...@@ -336,6 +340,8 @@ union wqcfg { ...@@ -336,6 +340,8 @@ union wqcfg {
u32 bits[8]; u32 bits[8];
} __packed; } __packed;
#define WQCFG_PASID_IDX 2
/* /*
* This macro calculates the offset into the WQCFG register * This macro calculates the offset into the WQCFG register
* idxd - struct idxd * * idxd - struct idxd *
...@@ -354,4 +360,22 @@ union wqcfg { ...@@ -354,4 +360,22 @@ union wqcfg {
#define WQCFG_STRIDES(_idxd_dev) ((_idxd_dev)->wqcfg_size / sizeof(u32)) #define WQCFG_STRIDES(_idxd_dev) ((_idxd_dev)->wqcfg_size / sizeof(u32))
#define GRPCFG_SIZE 64
#define GRPWQCFG_STRIDES 4
/*
 * This macro calculates the offset into the GRPCFG register block
 * idxd - struct idxd *
 * n - group id
 * ofs - the index of the 64-bit word within the group's GRPWQCFG field
 *
 * The GRPCFG space contains one register block per group. The n index
 * moves to the block for that particular group. The GRPWQCFG field is an
 * array of 64-bit registers; ofs selects which one to access.
 */
#define GRPWQCFG_OFFSET(idxd_dev, n, ofs) ((idxd_dev)->grpcfg_offset +\
(n) * GRPCFG_SIZE + sizeof(u64) * (ofs))
#define GRPENGCFG_OFFSET(idxd_dev, n) ((idxd_dev)->grpcfg_offset + (n) * GRPCFG_SIZE + 32)
#define GRPFLGCFG_OFFSET(idxd_dev, n) ((idxd_dev)->grpcfg_offset + (n) * GRPCFG_SIZE + 40)
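/*
 * Worked example (illustrative only; the 0x400 base is an assumed value,
 * the real base is grpcfg_offset read from the table offset register):
 * for group 1,
 *
 *   GRPWQCFG_OFFSET(idxd, 1, 0) = 0x400 + 1 * 64 + 8 * 0 = 0x440
 *   GRPWQCFG_OFFSET(idxd, 1, 3) = 0x400 + 1 * 64 + 8 * 3 = 0x458
 *   GRPENGCFG_OFFSET(idxd, 1)   = 0x400 + 1 * 64 + 32    = 0x460
 *   GRPFLGCFG_OFFSET(idxd, 1)   = 0x400 + 1 * 64 + 40    = 0x468
 *
 * Similarly, WQCFG_PASID_IDX 2 selects the third 32-bit word of the wqcfg
 * block above, i.e. bytes 8-11 that carry the pasid and pasid_en bits.
 */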
#endif #endif
...@@ -11,11 +11,22 @@ ...@@ -11,11 +11,22 @@
static struct idxd_desc *__get_desc(struct idxd_wq *wq, int idx, int cpu) static struct idxd_desc *__get_desc(struct idxd_wq *wq, int idx, int cpu)
{ {
struct idxd_desc *desc; struct idxd_desc *desc;
struct idxd_device *idxd = wq->idxd;
desc = wq->descs[idx]; desc = wq->descs[idx];
memset(desc->hw, 0, sizeof(struct dsa_hw_desc)); memset(desc->hw, 0, sizeof(struct dsa_hw_desc));
memset(desc->completion, 0, sizeof(struct dsa_completion_record)); memset(desc->completion, 0, idxd->compl_size);
desc->cpu = cpu; desc->cpu = cpu;
if (device_pasid_enabled(idxd))
desc->hw->pasid = idxd->pasid;
/*
* Descriptor completion vectors are 1-8 for MSIX. We will round
* robin through the 8 vectors.
*/
wq->vec_ptr = (wq->vec_ptr % idxd->num_wq_irqs) + 1;
desc->hw->int_handle = wq->vec_ptr;
return desc; return desc;
} }
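/*
 * Quick check of the round-robin update above (illustrative, assuming
 * idxd->num_wq_irqs == 8): starting from vec_ptr == 0 the sequence is
 * 1, 2, ..., 8, 1, 2, ... so int_handle always lands on MSIX vectors 1-8
 * and never on vector 0, which the driver uses for the misc/error
 * interrupt.
 */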
...@@ -70,18 +81,32 @@ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc) ...@@ -70,18 +81,32 @@ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
struct idxd_device *idxd = wq->idxd; struct idxd_device *idxd = wq->idxd;
int vec = desc->hw->int_handle; int vec = desc->hw->int_handle;
void __iomem *portal; void __iomem *portal;
int rc;
if (idxd->state != IDXD_DEV_ENABLED) if (idxd->state != IDXD_DEV_ENABLED)
return -EIO; return -EIO;
portal = wq->dportal; portal = wq->portal;
/* /*
* The wmb() flushes writes to coherent DMA data before possibly * The wmb() flushes writes to coherent DMA data before
* triggering a DMA read. The wmb() is necessary even on UP because * possibly triggering a DMA read. The wmb() is necessary
* the recipient is a device. * even on UP because the recipient is a device.
*/ */
wmb(); wmb();
iosubmit_cmds512(portal, desc->hw, 1); if (wq_dedicated(wq)) {
iosubmit_cmds512(portal, desc->hw, 1);
} else {
/*
 * A queue-full rejection is unlikely here since descriptor
 * allocation is gated at the wq size. If we do receive -EAGAIN,
 * something went wrong, such as the device not accepting
 * descriptors at all.
 */
rc = enqcmds(portal, desc->hw);
if (rc < 0)
return rc;
}
/* /*
* Pending the descriptor to the lockless list for the irq_entry * Pending the descriptor to the lockless list for the irq_entry
......
...@@ -41,14 +41,24 @@ static struct device_type dsa_device_type = { ...@@ -41,14 +41,24 @@ static struct device_type dsa_device_type = {
.release = idxd_conf_device_release, .release = idxd_conf_device_release,
}; };
static struct device_type iax_device_type = {
.name = "iax",
.release = idxd_conf_device_release,
};
static inline bool is_dsa_dev(struct device *dev) static inline bool is_dsa_dev(struct device *dev)
{ {
return dev ? dev->type == &dsa_device_type : false; return dev ? dev->type == &dsa_device_type : false;
} }
static inline bool is_iax_dev(struct device *dev)
{
return dev ? dev->type == &iax_device_type : false;
}
static inline bool is_idxd_dev(struct device *dev) static inline bool is_idxd_dev(struct device *dev)
{ {
return is_dsa_dev(dev); return is_dsa_dev(dev) || is_iax_dev(dev);
} }
static inline bool is_idxd_wq_dev(struct device *dev) static inline bool is_idxd_wq_dev(struct device *dev)
...@@ -175,6 +185,30 @@ static int idxd_config_bus_probe(struct device *dev) ...@@ -175,6 +185,30 @@ static int idxd_config_bus_probe(struct device *dev)
return -EINVAL; return -EINVAL;
} }
/* Shared WQ checks */
if (wq_shared(wq)) {
if (!device_swq_supported(idxd)) {
dev_warn(dev,
"PASID not enabled and shared WQ.\n");
mutex_unlock(&wq->wq_lock);
return -ENXIO;
}
/*
 * A shared wq with the threshold set to 0 means the user either
 * did not set a threshold or transitioned from a dedicated wq
 * without setting one. A value of 0 would effectively disable
 * the shared wq. The driver does not allow a threshold of 0 to
 * be set via sysfs.
 */
if (wq->threshold == 0) {
dev_warn(dev,
"Shared WQ and threshold 0.\n");
mutex_unlock(&wq->wq_lock);
return -EINVAL;
}
}
rc = idxd_wq_alloc_resources(wq); rc = idxd_wq_alloc_resources(wq);
if (rc < 0) { if (rc < 0) {
mutex_unlock(&wq->wq_lock); mutex_unlock(&wq->wq_lock);
...@@ -335,8 +369,17 @@ struct bus_type dsa_bus_type = { ...@@ -335,8 +369,17 @@ struct bus_type dsa_bus_type = {
.shutdown = idxd_config_bus_shutdown, .shutdown = idxd_config_bus_shutdown,
}; };
struct bus_type iax_bus_type = {
.name = "iax",
.match = idxd_config_bus_match,
.probe = idxd_config_bus_probe,
.remove = idxd_config_bus_remove,
.shutdown = idxd_config_bus_shutdown,
};
static struct bus_type *idxd_bus_types[] = { static struct bus_type *idxd_bus_types[] = {
&dsa_bus_type &dsa_bus_type,
&iax_bus_type
}; };
static struct idxd_device_driver dsa_drv = { static struct idxd_device_driver dsa_drv = {
...@@ -348,8 +391,18 @@ static struct idxd_device_driver dsa_drv = { ...@@ -348,8 +391,18 @@ static struct idxd_device_driver dsa_drv = {
}, },
}; };
static struct idxd_device_driver iax_drv = {
.drv = {
.name = "iax",
.bus = &iax_bus_type,
.owner = THIS_MODULE,
.mod_name = KBUILD_MODNAME,
},
};
static struct idxd_device_driver *idxd_drvs[] = { static struct idxd_device_driver *idxd_drvs[] = {
&dsa_drv &dsa_drv,
&iax_drv
}; };
struct bus_type *idxd_get_bus_type(struct idxd_device *idxd) struct bus_type *idxd_get_bus_type(struct idxd_device *idxd)
...@@ -361,6 +414,8 @@ static struct device_type *idxd_get_device_type(struct idxd_device *idxd) ...@@ -361,6 +414,8 @@ static struct device_type *idxd_get_device_type(struct idxd_device *idxd)
{ {
if (idxd->type == IDXD_TYPE_DSA) if (idxd->type == IDXD_TYPE_DSA)
return &dsa_device_type; return &dsa_device_type;
else if (idxd->type == IDXD_TYPE_IAX)
return &iax_device_type;
else else
return NULL; return NULL;
} }
...@@ -501,6 +556,9 @@ static ssize_t group_tokens_reserved_store(struct device *dev, ...@@ -501,6 +556,9 @@ static ssize_t group_tokens_reserved_store(struct device *dev,
if (rc < 0) if (rc < 0)
return -EINVAL; return -EINVAL;
if (idxd->type == IDXD_TYPE_IAX)
return -EOPNOTSUPP;
if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags)) if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
return -EPERM; return -EPERM;
...@@ -546,6 +604,9 @@ static ssize_t group_tokens_allowed_store(struct device *dev, ...@@ -546,6 +604,9 @@ static ssize_t group_tokens_allowed_store(struct device *dev,
if (rc < 0) if (rc < 0)
return -EINVAL; return -EINVAL;
if (idxd->type == IDXD_TYPE_IAX)
return -EOPNOTSUPP;
if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags)) if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
return -EPERM; return -EPERM;
...@@ -588,6 +649,9 @@ static ssize_t group_use_token_limit_store(struct device *dev, ...@@ -588,6 +649,9 @@ static ssize_t group_use_token_limit_store(struct device *dev,
if (rc < 0) if (rc < 0)
return -EINVAL; return -EINVAL;
if (idxd->type == IDXD_TYPE_IAX)
return -EOPNOTSUPP;
if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags)) if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
return -EPERM; return -EPERM;
...@@ -875,6 +939,8 @@ static ssize_t wq_mode_store(struct device *dev, ...@@ -875,6 +939,8 @@ static ssize_t wq_mode_store(struct device *dev,
if (sysfs_streq(buf, "dedicated")) { if (sysfs_streq(buf, "dedicated")) {
set_bit(WQ_FLAG_DEDICATED, &wq->flags); set_bit(WQ_FLAG_DEDICATED, &wq->flags);
wq->threshold = 0; wq->threshold = 0;
} else if (sysfs_streq(buf, "shared") && device_swq_supported(idxd)) {
clear_bit(WQ_FLAG_DEDICATED, &wq->flags);
} else { } else {
return -EINVAL; return -EINVAL;
} }
...@@ -973,6 +1039,87 @@ static ssize_t wq_priority_store(struct device *dev, ...@@ -973,6 +1039,87 @@ static ssize_t wq_priority_store(struct device *dev,
static struct device_attribute dev_attr_wq_priority = static struct device_attribute dev_attr_wq_priority =
__ATTR(priority, 0644, wq_priority_show, wq_priority_store); __ATTR(priority, 0644, wq_priority_show, wq_priority_store);
static ssize_t wq_block_on_fault_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct idxd_wq *wq = container_of(dev, struct idxd_wq, conf_dev);
return sprintf(buf, "%u\n",
test_bit(WQ_FLAG_BLOCK_ON_FAULT, &wq->flags));
}
static ssize_t wq_block_on_fault_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct idxd_wq *wq = container_of(dev, struct idxd_wq, conf_dev);
struct idxd_device *idxd = wq->idxd;
bool bof;
int rc;
if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
return -EPERM;
if (wq->state != IDXD_WQ_DISABLED)
return -ENXIO;
rc = kstrtobool(buf, &bof);
if (rc < 0)
return rc;
if (bof)
set_bit(WQ_FLAG_BLOCK_ON_FAULT, &wq->flags);
else
clear_bit(WQ_FLAG_BLOCK_ON_FAULT, &wq->flags);
return count;
}
static struct device_attribute dev_attr_wq_block_on_fault =
__ATTR(block_on_fault, 0644, wq_block_on_fault_show,
wq_block_on_fault_store);
static ssize_t wq_threshold_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct idxd_wq *wq = container_of(dev, struct idxd_wq, conf_dev);
return sprintf(buf, "%u\n", wq->threshold);
}
static ssize_t wq_threshold_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct idxd_wq *wq = container_of(dev, struct idxd_wq, conf_dev);
struct idxd_device *idxd = wq->idxd;
unsigned int val;
int rc;
rc = kstrtouint(buf, 0, &val);
if (rc < 0)
return -EINVAL;
if (val > wq->size || val <= 0)
return -EINVAL;
if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
return -EPERM;
if (wq->state != IDXD_WQ_DISABLED)
return -ENXIO;
if (test_bit(WQ_FLAG_DEDICATED, &wq->flags))
return -EINVAL;
wq->threshold = val;
return count;
}
static struct device_attribute dev_attr_wq_threshold =
__ATTR(threshold, 0644, wq_threshold_show, wq_threshold_store);
static ssize_t wq_type_show(struct device *dev, static ssize_t wq_type_show(struct device *dev,
struct device_attribute *attr, char *buf) struct device_attribute *attr, char *buf)
{ {
...@@ -1044,6 +1191,13 @@ static ssize_t wq_name_store(struct device *dev, ...@@ -1044,6 +1191,13 @@ static ssize_t wq_name_store(struct device *dev,
if (strlen(buf) > WQ_NAME_SIZE || strlen(buf) == 0) if (strlen(buf) > WQ_NAME_SIZE || strlen(buf) == 0)
return -EINVAL; return -EINVAL;
/*
* This is temporarily placed here until we have SVM support for
* dmaengine.
*/
if (wq->type == IDXD_WQT_KERNEL && device_pasid_enabled(wq->idxd))
return -EOPNOTSUPP;
memset(wq->name, 0, WQ_NAME_SIZE + 1); memset(wq->name, 0, WQ_NAME_SIZE + 1);
strncpy(wq->name, buf, WQ_NAME_SIZE); strncpy(wq->name, buf, WQ_NAME_SIZE);
strreplace(wq->name, '\n', '\0'); strreplace(wq->name, '\n', '\0');
...@@ -1147,6 +1301,39 @@ static ssize_t wq_max_batch_size_store(struct device *dev, struct device_attribu ...@@ -1147,6 +1301,39 @@ static ssize_t wq_max_batch_size_store(struct device *dev, struct device_attribu
static struct device_attribute dev_attr_wq_max_batch_size = static struct device_attribute dev_attr_wq_max_batch_size =
__ATTR(max_batch_size, 0644, wq_max_batch_size_show, wq_max_batch_size_store); __ATTR(max_batch_size, 0644, wq_max_batch_size_show, wq_max_batch_size_store);
static ssize_t wq_ats_disable_show(struct device *dev, struct device_attribute *attr, char *buf)
{
struct idxd_wq *wq = container_of(dev, struct idxd_wq, conf_dev);
return sprintf(buf, "%u\n", wq->ats_dis);
}
static ssize_t wq_ats_disable_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct idxd_wq *wq = container_of(dev, struct idxd_wq, conf_dev);
struct idxd_device *idxd = wq->idxd;
bool ats_dis;
int rc;
if (wq->state != IDXD_WQ_DISABLED)
return -EPERM;
if (!idxd->hw.wq_cap.wq_ats_support)
return -EOPNOTSUPP;
rc = kstrtobool(buf, &ats_dis);
if (rc < 0)
return rc;
wq->ats_dis = ats_dis;
return count;
}
static struct device_attribute dev_attr_wq_ats_disable =
__ATTR(ats_disable, 0644, wq_ats_disable_show, wq_ats_disable_store);
static struct attribute *idxd_wq_attributes[] = { static struct attribute *idxd_wq_attributes[] = {
&dev_attr_wq_clients.attr, &dev_attr_wq_clients.attr,
&dev_attr_wq_state.attr, &dev_attr_wq_state.attr,
...@@ -1154,11 +1341,14 @@ static struct attribute *idxd_wq_attributes[] = { ...@@ -1154,11 +1341,14 @@ static struct attribute *idxd_wq_attributes[] = {
&dev_attr_wq_mode.attr, &dev_attr_wq_mode.attr,
&dev_attr_wq_size.attr, &dev_attr_wq_size.attr,
&dev_attr_wq_priority.attr, &dev_attr_wq_priority.attr,
&dev_attr_wq_block_on_fault.attr,
&dev_attr_wq_threshold.attr,
&dev_attr_wq_type.attr, &dev_attr_wq_type.attr,
&dev_attr_wq_name.attr, &dev_attr_wq_name.attr,
&dev_attr_wq_cdev_minor.attr, &dev_attr_wq_cdev_minor.attr,
&dev_attr_wq_max_transfer_size.attr, &dev_attr_wq_max_transfer_size.attr,
&dev_attr_wq_max_batch_size.attr, &dev_attr_wq_max_batch_size.attr,
&dev_attr_wq_ats_disable.attr,
NULL, NULL,
}; };
...@@ -1305,6 +1495,16 @@ static ssize_t clients_show(struct device *dev, ...@@ -1305,6 +1495,16 @@ static ssize_t clients_show(struct device *dev,
} }
static DEVICE_ATTR_RO(clients); static DEVICE_ATTR_RO(clients);
static ssize_t pasid_enabled_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct idxd_device *idxd =
container_of(dev, struct idxd_device, conf_dev);
return sprintf(buf, "%u\n", device_pasid_enabled(idxd));
}
static DEVICE_ATTR_RO(pasid_enabled);
static ssize_t state_show(struct device *dev, static ssize_t state_show(struct device *dev,
struct device_attribute *attr, char *buf) struct device_attribute *attr, char *buf)
{ {
...@@ -1424,6 +1624,7 @@ static struct attribute *idxd_device_attributes[] = { ...@@ -1424,6 +1624,7 @@ static struct attribute *idxd_device_attributes[] = {
&dev_attr_gen_cap.attr, &dev_attr_gen_cap.attr,
&dev_attr_configurable.attr, &dev_attr_configurable.attr,
&dev_attr_clients.attr, &dev_attr_clients.attr,
&dev_attr_pasid_enabled.attr,
&dev_attr_state.attr, &dev_attr_state.attr,
&dev_attr_errors.attr, &dev_attr_errors.attr,
&dev_attr_max_tokens.attr, &dev_attr_max_tokens.attr,
......
...@@ -191,32 +191,13 @@ struct imxdma_filter_data { ...@@ -191,32 +191,13 @@ struct imxdma_filter_data {
int request; int request;
}; };
static const struct platform_device_id imx_dma_devtype[] = {
{
.name = "imx1-dma",
.driver_data = IMX1_DMA,
}, {
.name = "imx21-dma",
.driver_data = IMX21_DMA,
}, {
.name = "imx27-dma",
.driver_data = IMX27_DMA,
}, {
/* sentinel */
}
};
MODULE_DEVICE_TABLE(platform, imx_dma_devtype);
static const struct of_device_id imx_dma_of_dev_id[] = { static const struct of_device_id imx_dma_of_dev_id[] = {
{ {
.compatible = "fsl,imx1-dma", .compatible = "fsl,imx1-dma", .data = (const void *)IMX1_DMA,
.data = &imx_dma_devtype[IMX1_DMA],
}, { }, {
.compatible = "fsl,imx21-dma", .compatible = "fsl,imx21-dma", .data = (const void *)IMX21_DMA,
.data = &imx_dma_devtype[IMX21_DMA],
}, { }, {
.compatible = "fsl,imx27-dma", .compatible = "fsl,imx27-dma", .data = (const void *)IMX27_DMA,
.data = &imx_dma_devtype[IMX27_DMA],
}, { }, {
/* sentinel */ /* sentinel */
} }
...@@ -1056,20 +1037,15 @@ static int __init imxdma_probe(struct platform_device *pdev) ...@@ -1056,20 +1037,15 @@ static int __init imxdma_probe(struct platform_device *pdev)
{ {
struct imxdma_engine *imxdma; struct imxdma_engine *imxdma;
struct resource *res; struct resource *res;
const struct of_device_id *of_id;
int ret, i; int ret, i;
int irq, irq_err; int irq, irq_err;
of_id = of_match_device(imx_dma_of_dev_id, &pdev->dev);
if (of_id)
pdev->id_entry = of_id->data;
imxdma = devm_kzalloc(&pdev->dev, sizeof(*imxdma), GFP_KERNEL); imxdma = devm_kzalloc(&pdev->dev, sizeof(*imxdma), GFP_KERNEL);
if (!imxdma) if (!imxdma)
return -ENOMEM; return -ENOMEM;
imxdma->dev = &pdev->dev; imxdma->dev = &pdev->dev;
imxdma->devtype = pdev->id_entry->driver_data; imxdma->devtype = (enum imx_dma_type)of_device_get_match_data(&pdev->dev);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0); res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
imxdma->base = devm_ioremap_resource(&pdev->dev, res); imxdma->base = devm_ioremap_resource(&pdev->dev, res);
...@@ -1263,7 +1239,6 @@ static struct platform_driver imxdma_driver = { ...@@ -1263,7 +1239,6 @@ static struct platform_driver imxdma_driver = {
.name = "imx-dma", .name = "imx-dma",
.of_match_table = imx_dma_of_dev_id, .of_match_table = imx_dma_of_dev_id,
}, },
.id_table = imx_dma_devtype,
.remove = imxdma_remove, .remove = imxdma_remove,
}; };
......
...@@ -566,37 +566,6 @@ static struct sdma_driver_data sdma_imx8mq = { ...@@ -566,37 +566,6 @@ static struct sdma_driver_data sdma_imx8mq = {
.check_ratio = 1, .check_ratio = 1,
}; };
static const struct platform_device_id sdma_devtypes[] = {
{
.name = "imx25-sdma",
.driver_data = (unsigned long)&sdma_imx25,
}, {
.name = "imx31-sdma",
.driver_data = (unsigned long)&sdma_imx31,
}, {
.name = "imx35-sdma",
.driver_data = (unsigned long)&sdma_imx35,
}, {
.name = "imx51-sdma",
.driver_data = (unsigned long)&sdma_imx51,
}, {
.name = "imx53-sdma",
.driver_data = (unsigned long)&sdma_imx53,
}, {
.name = "imx6q-sdma",
.driver_data = (unsigned long)&sdma_imx6q,
}, {
.name = "imx7d-sdma",
.driver_data = (unsigned long)&sdma_imx7d,
}, {
.name = "imx8mq-sdma",
.driver_data = (unsigned long)&sdma_imx8mq,
}, {
/* sentinel */
}
};
MODULE_DEVICE_TABLE(platform, sdma_devtypes);
static const struct of_device_id sdma_dt_ids[] = { static const struct of_device_id sdma_dt_ids[] = {
{ .compatible = "fsl,imx6q-sdma", .data = &sdma_imx6q, }, { .compatible = "fsl,imx6q-sdma", .data = &sdma_imx6q, },
{ .compatible = "fsl,imx53-sdma", .data = &sdma_imx53, }, { .compatible = "fsl,imx53-sdma", .data = &sdma_imx53, },
...@@ -1998,11 +1967,7 @@ static int sdma_probe(struct platform_device *pdev) ...@@ -1998,11 +1967,7 @@ static int sdma_probe(struct platform_device *pdev)
s32 *saddr_arr; s32 *saddr_arr;
const struct sdma_driver_data *drvdata = NULL; const struct sdma_driver_data *drvdata = NULL;
if (of_id) drvdata = of_id->data;
drvdata = of_id->data;
else if (pdev->id_entry)
drvdata = (void *)pdev->id_entry->driver_data;
if (!drvdata) { if (!drvdata) {
dev_err(&pdev->dev, "unable to find driver data\n"); dev_err(&pdev->dev, "unable to find driver data\n");
return -EINVAL; return -EINVAL;
...@@ -2211,7 +2176,6 @@ static struct platform_driver sdma_driver = { ...@@ -2211,7 +2176,6 @@ static struct platform_driver sdma_driver = {
.name = "imx-sdma", .name = "imx-sdma",
.of_match_table = sdma_dt_ids, .of_match_table = sdma_dt_ids,
}, },
.id_table = sdma_devtypes,
.remove = sdma_remove, .remove = sdma_remove,
.probe = sdma_probe, .probe = sdma_probe,
}; };
......
...@@ -1160,14 +1160,13 @@ static irqreturn_t idmac_interrupt(int irq, void *dev_id) ...@@ -1160,14 +1160,13 @@ static irqreturn_t idmac_interrupt(int irq, void *dev_id)
struct idmac_tx_desc *desc, *descnew; struct idmac_tx_desc *desc, *descnew;
bool done = false; bool done = false;
u32 ready0, ready1, curbuf, err; u32 ready0, ready1, curbuf, err;
unsigned long flags;
struct dmaengine_desc_callback cb; struct dmaengine_desc_callback cb;
/* IDMAC has cleared the respective BUFx_RDY bit, we manage the buffer */ /* IDMAC has cleared the respective BUFx_RDY bit, we manage the buffer */
dev_dbg(dev, "IDMAC irq %d, buf %d\n", irq, ichan->active_buffer); dev_dbg(dev, "IDMAC irq %d, buf %d\n", irq, ichan->active_buffer);
spin_lock_irqsave(&ipu_data.lock, flags); spin_lock(&ipu_data.lock);
ready0 = idmac_read_ipureg(&ipu_data, IPU_CHA_BUF0_RDY); ready0 = idmac_read_ipureg(&ipu_data, IPU_CHA_BUF0_RDY);
ready1 = idmac_read_ipureg(&ipu_data, IPU_CHA_BUF1_RDY); ready1 = idmac_read_ipureg(&ipu_data, IPU_CHA_BUF1_RDY);
...@@ -1176,7 +1175,7 @@ static irqreturn_t idmac_interrupt(int irq, void *dev_id) ...@@ -1176,7 +1175,7 @@ static irqreturn_t idmac_interrupt(int irq, void *dev_id)
if (err & (1 << chan_id)) { if (err & (1 << chan_id)) {
idmac_write_ipureg(&ipu_data, 1 << chan_id, IPU_INT_STAT_4); idmac_write_ipureg(&ipu_data, 1 << chan_id, IPU_INT_STAT_4);
spin_unlock_irqrestore(&ipu_data.lock, flags); spin_unlock(&ipu_data.lock);
/* /*
* Doing this * Doing this
* ichan->sg[0] = ichan->sg[1] = NULL; * ichan->sg[0] = ichan->sg[1] = NULL;
...@@ -1188,7 +1187,7 @@ static irqreturn_t idmac_interrupt(int irq, void *dev_id) ...@@ -1188,7 +1187,7 @@ static irqreturn_t idmac_interrupt(int irq, void *dev_id)
chan_id, ready0, ready1, curbuf); chan_id, ready0, ready1, curbuf);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
spin_unlock_irqrestore(&ipu_data.lock, flags); spin_unlock(&ipu_data.lock);
/* Other interrupts do not interfere with this channel */ /* Other interrupts do not interfere with this channel */
spin_lock(&ichan->lock); spin_lock(&ichan->lock);
...@@ -1251,9 +1250,9 @@ static irqreturn_t idmac_interrupt(int irq, void *dev_id) ...@@ -1251,9 +1250,9 @@ static irqreturn_t idmac_interrupt(int irq, void *dev_id)
if (unlikely(sgnew)) { if (unlikely(sgnew)) {
ipu_submit_buffer(ichan, descnew, sgnew, !ichan->active_buffer); ipu_submit_buffer(ichan, descnew, sgnew, !ichan->active_buffer);
} else { } else {
spin_lock_irqsave(&ipu_data.lock, flags); spin_lock(&ipu_data.lock);
ipu_ic_disable_task(&ipu_data, chan_id); ipu_ic_disable_task(&ipu_data, chan_id);
spin_unlock_irqrestore(&ipu_data.lock, flags); spin_unlock(&ipu_data.lock);
ichan->status = IPU_CHANNEL_READY; ichan->status = IPU_CHANNEL_READY;
/* Continue to check for complete descriptor */ /* Continue to check for complete descriptor */
} }
......
...@@ -223,24 +223,23 @@ static irqreturn_t k3_dma_int_handler(int irq, void *dev_id) ...@@ -223,24 +223,23 @@ static irqreturn_t k3_dma_int_handler(int irq, void *dev_id)
i = __ffs(stat); i = __ffs(stat);
stat &= ~BIT(i); stat &= ~BIT(i);
if (likely(tc1 & BIT(i)) || (tc2 & BIT(i))) { if (likely(tc1 & BIT(i)) || (tc2 & BIT(i))) {
unsigned long flags;
p = &d->phy[i]; p = &d->phy[i];
c = p->vchan; c = p->vchan;
if (c && (tc1 & BIT(i))) { if (c && (tc1 & BIT(i))) {
spin_lock_irqsave(&c->vc.lock, flags); spin_lock(&c->vc.lock);
if (p->ds_run != NULL) { if (p->ds_run != NULL) {
vchan_cookie_complete(&p->ds_run->vd); vchan_cookie_complete(&p->ds_run->vd);
p->ds_done = p->ds_run; p->ds_done = p->ds_run;
p->ds_run = NULL; p->ds_run = NULL;
} }
spin_unlock_irqrestore(&c->vc.lock, flags); spin_unlock(&c->vc.lock);
} }
if (c && (tc2 & BIT(i))) { if (c && (tc2 & BIT(i))) {
spin_lock_irqsave(&c->vc.lock, flags); spin_lock(&c->vc.lock);
if (p->ds_run != NULL) if (p->ds_run != NULL)
vchan_cyclic_callback(&p->ds_run->vd); vchan_cyclic_callback(&p->ds_run->vd);
spin_unlock_irqrestore(&c->vc.lock, flags); spin_unlock(&c->vc.lock);
} }
irq_chan |= BIT(i); irq_chan |= BIT(i);
} }
......
...@@ -160,10 +160,9 @@ static irqreturn_t milbeaut_xdmac_interrupt(int irq, void *dev_id) ...@@ -160,10 +160,9 @@ static irqreturn_t milbeaut_xdmac_interrupt(int irq, void *dev_id)
{ {
struct milbeaut_xdmac_chan *mc = dev_id; struct milbeaut_xdmac_chan *mc = dev_id;
struct milbeaut_xdmac_desc *md; struct milbeaut_xdmac_desc *md;
unsigned long flags;
u32 val; u32 val;
spin_lock_irqsave(&mc->vc.lock, flags); spin_lock(&mc->vc.lock);
/* Ack and Stop */ /* Ack and Stop */
val = FIELD_PREP(M10V_XDDSD_IS_MASK, 0x0); val = FIELD_PREP(M10V_XDDSD_IS_MASK, 0x0);
...@@ -177,7 +176,7 @@ static irqreturn_t milbeaut_xdmac_interrupt(int irq, void *dev_id) ...@@ -177,7 +176,7 @@ static irqreturn_t milbeaut_xdmac_interrupt(int irq, void *dev_id)
milbeaut_xdmac_start(mc); milbeaut_xdmac_start(mc);
out: out:
spin_unlock_irqrestore(&mc->vc.lock, flags); spin_unlock(&mc->vc.lock);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
......
...@@ -524,7 +524,6 @@ static irqreturn_t moxart_dma_interrupt(int irq, void *devid) ...@@ -524,7 +524,6 @@ static irqreturn_t moxart_dma_interrupt(int irq, void *devid)
struct moxart_dmadev *mc = devid; struct moxart_dmadev *mc = devid;
struct moxart_chan *ch = &mc->slave_chans[0]; struct moxart_chan *ch = &mc->slave_chans[0];
unsigned int i; unsigned int i;
unsigned long flags;
u32 ctrl; u32 ctrl;
dev_dbg(chan2dev(&ch->vc.chan), "%s\n", __func__); dev_dbg(chan2dev(&ch->vc.chan), "%s\n", __func__);
...@@ -541,14 +540,14 @@ static irqreturn_t moxart_dma_interrupt(int irq, void *devid) ...@@ -541,14 +540,14 @@ static irqreturn_t moxart_dma_interrupt(int irq, void *devid)
if (ctrl & APB_DMA_FIN_INT_STS) { if (ctrl & APB_DMA_FIN_INT_STS) {
ctrl &= ~APB_DMA_FIN_INT_STS; ctrl &= ~APB_DMA_FIN_INT_STS;
if (ch->desc) { if (ch->desc) {
spin_lock_irqsave(&ch->vc.lock, flags); spin_lock(&ch->vc.lock);
if (++ch->sgidx < ch->desc->sglen) { if (++ch->sgidx < ch->desc->sglen) {
moxart_dma_start_sg(ch, ch->sgidx); moxart_dma_start_sg(ch, ch->sgidx);
} else { } else {
vchan_cookie_complete(&ch->desc->vd); vchan_cookie_complete(&ch->desc->vd);
moxart_dma_start_desc(&ch->vc.chan); moxart_dma_start_desc(&ch->vc.chan);
} }
spin_unlock_irqrestore(&ch->vc.lock, flags); spin_unlock(&ch->vc.lock);
} }
} }
......
...@@ -1455,7 +1455,7 @@ static struct platform_driver mv_xor_driver = { ...@@ -1455,7 +1455,7 @@ static struct platform_driver mv_xor_driver = {
.resume = mv_xor_resume, .resume = mv_xor_resume,
.driver = { .driver = {
.name = MV_XOR_NAME, .name = MV_XOR_NAME,
.of_match_table = of_match_ptr(mv_xor_dt_ids), .of_match_table = mv_xor_dt_ids,
}, },
}; };
......
...@@ -771,8 +771,10 @@ static int mv_xor_v2_probe(struct platform_device *pdev) ...@@ -771,8 +771,10 @@ static int mv_xor_v2_probe(struct platform_device *pdev)
goto disable_clk; goto disable_clk;
msi_desc = first_msi_entry(&pdev->dev); msi_desc = first_msi_entry(&pdev->dev);
if (!msi_desc) if (!msi_desc) {
ret = -ENODEV;
goto free_msi_irqs; goto free_msi_irqs;
}
xor_dev->msi_desc = msi_desc; xor_dev->msi_desc = msi_desc;
ret = devm_request_irq(&pdev->dev, msi_desc->irq, ret = devm_request_irq(&pdev->dev, msi_desc->irq,
......
...@@ -167,29 +167,11 @@ static struct mxs_dma_type mxs_dma_types[] = { ...@@ -167,29 +167,11 @@ static struct mxs_dma_type mxs_dma_types[] = {
} }
}; };
static const struct platform_device_id mxs_dma_ids[] = {
{
.name = "imx23-dma-apbh",
.driver_data = (kernel_ulong_t) &mxs_dma_types[0],
}, {
.name = "imx23-dma-apbx",
.driver_data = (kernel_ulong_t) &mxs_dma_types[1],
}, {
.name = "imx28-dma-apbh",
.driver_data = (kernel_ulong_t) &mxs_dma_types[2],
}, {
.name = "imx28-dma-apbx",
.driver_data = (kernel_ulong_t) &mxs_dma_types[3],
}, {
/* end of list */
}
};
static const struct of_device_id mxs_dma_dt_ids[] = { static const struct of_device_id mxs_dma_dt_ids[] = {
{ .compatible = "fsl,imx23-dma-apbh", .data = &mxs_dma_ids[0], }, { .compatible = "fsl,imx23-dma-apbh", .data = &mxs_dma_types[0], },
{ .compatible = "fsl,imx23-dma-apbx", .data = &mxs_dma_ids[1], }, { .compatible = "fsl,imx23-dma-apbx", .data = &mxs_dma_types[1], },
{ .compatible = "fsl,imx28-dma-apbh", .data = &mxs_dma_ids[2], }, { .compatible = "fsl,imx28-dma-apbh", .data = &mxs_dma_types[2], },
{ .compatible = "fsl,imx28-dma-apbx", .data = &mxs_dma_ids[3], }, { .compatible = "fsl,imx28-dma-apbx", .data = &mxs_dma_types[3], },
{ /* sentinel */ } { /* sentinel */ }
}; };
MODULE_DEVICE_TABLE(of, mxs_dma_dt_ids); MODULE_DEVICE_TABLE(of, mxs_dma_dt_ids);
...@@ -762,8 +744,6 @@ static struct dma_chan *mxs_dma_xlate(struct of_phandle_args *dma_spec, ...@@ -762,8 +744,6 @@ static struct dma_chan *mxs_dma_xlate(struct of_phandle_args *dma_spec,
static int __init mxs_dma_probe(struct platform_device *pdev) static int __init mxs_dma_probe(struct platform_device *pdev)
{ {
struct device_node *np = pdev->dev.of_node; struct device_node *np = pdev->dev.of_node;
const struct platform_device_id *id_entry;
const struct of_device_id *of_id;
const struct mxs_dma_type *dma_type; const struct mxs_dma_type *dma_type;
struct mxs_dma_engine *mxs_dma; struct mxs_dma_engine *mxs_dma;
struct resource *iores; struct resource *iores;
...@@ -779,13 +759,7 @@ static int __init mxs_dma_probe(struct platform_device *pdev) ...@@ -779,13 +759,7 @@ static int __init mxs_dma_probe(struct platform_device *pdev)
return ret; return ret;
} }
of_id = of_match_device(mxs_dma_dt_ids, &pdev->dev); dma_type = (struct mxs_dma_type *)of_device_get_match_data(&pdev->dev);
if (of_id)
id_entry = of_id->data;
else
id_entry = platform_get_device_id(pdev);
dma_type = (struct mxs_dma_type *)id_entry->driver_data;
mxs_dma->type = dma_type->type; mxs_dma->type = dma_type->type;
mxs_dma->dev_id = dma_type->id; mxs_dma->dev_id = dma_type->id;
...@@ -865,7 +839,6 @@ static struct platform_driver mxs_dma_driver = { ...@@ -865,7 +839,6 @@ static struct platform_driver mxs_dma_driver = {
.name = "mxs-dma", .name = "mxs-dma",
.of_match_table = mxs_dma_dt_ids, .of_match_table = mxs_dma_dt_ids,
}, },
.id_table = mxs_dma_ids,
}; };
static int __init mxs_dma_module_init(void) static int __init mxs_dma_module_init(void)
......
...@@ -75,8 +75,18 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec, ...@@ -75,8 +75,18 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
ofdma->dma_router->route_free(ofdma->dma_router->dev, ofdma->dma_router->route_free(ofdma->dma_router->dev,
route_data); route_data);
} else { } else {
int ret = 0;
chan->router = ofdma->dma_router; chan->router = ofdma->dma_router;
chan->route_data = route_data; chan->route_data = route_data;
if (chan->device->device_router_config)
ret = chan->device->device_router_config(chan);
if (ret) {
dma_release_channel(chan);
chan = ERR_PTR(ret);
}
} }
/* /*
......
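The of_dma_router_xlate() change above adds an optional device_router_config
callback that a DMA controller driver can supply so the router's route_data
is applied once the channel has been selected. A minimal sketch of such a
callback (hypothetical driver: the foo_* names, the to_foo_chan() helper and
the FOO_EVT_REG register are assumptions, not part of this patch set):

static int foo_dma_router_config(struct dma_chan *chan)
{
	struct foo_chan *fchan = to_foo_chan(chan);	/* assumed helper */
	struct foo_route_data *rd = chan->route_data;	/* filled in by the router's route_allocate */

	if (!rd)
		return 0;

	/* program the event/trigger the router assigned to this channel */
	writel(rd->event, fchan->base + FOO_EVT_REG);
	return 0;
}

The driver would register it via dma_dev->device_router_config alongside its
other dmaengine callbacks; of_dma_router_xlate() invokes it after the channel
is acquired and releases the channel if the callback fails.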
...@@ -1527,8 +1527,6 @@ static int pl330_submit_req(struct pl330_thread *thrd, ...@@ -1527,8 +1527,6 @@ static int pl330_submit_req(struct pl330_thread *thrd,
/* First dry run to check if req is acceptable */ /* First dry run to check if req is acceptable */
ret = _setup_req(pl330, 1, thrd, idx, &xs); ret = _setup_req(pl330, 1, thrd, idx, &xs);
if (ret < 0)
goto xfer_exit;
if (ret > pl330->mcbufsz / 2) { if (ret > pl330->mcbufsz / 2) {
dev_info(pl330->ddma.dev, "%s:%d Try increasing mcbufsz (%i/%i)\n", dev_info(pl330->ddma.dev, "%s:%d Try increasing mcbufsz (%i/%i)\n",
......
@@ -69,7 +69,7 @@ struct ppc_dma_chan_ref {
 };

 /* The list of channels exported by ppc440spe ADMA */
-struct list_head
+static struct list_head
 ppc440spe_adma_chan_list = LIST_HEAD_INIT(ppc440spe_adma_chan_list);

 /* This flag is set when want to refetch the xor chain in the interrupt
@@ -559,7 +559,6 @@ static void ppc440spe_desc_set_src_mult(struct ppc440spe_adma_desc_slot *desc,
                                        int sg_index, unsigned char mult_value)
 {
        struct dma_cdb *dma_hw_desc;
-       struct xor_cb *xor_hw_desc;
        u32 *psgu;

        switch (chan->device->id) {
@@ -590,7 +589,6 @@ static void ppc440spe_desc_set_src_mult(struct ppc440spe_adma_desc_slot *desc,
                *psgu |= cpu_to_le32(mult_value << mult_index);
                break;
        case PPC440SPE_XOR_ID:
-               xor_hw_desc = desc->hw_desc;
                break;
        default:
                BUG();
......
@@ -606,7 +606,6 @@ static irqreturn_t pxad_chan_handler(int irq, void *dev_id)
        struct pxad_chan *chan = phy->vchan;
        struct virt_dma_desc *vd, *tmp;
        unsigned int dcsr;
-       unsigned long flags;
        bool vd_completed;
        dma_cookie_t last_started = 0;

@@ -616,7 +615,7 @@ static irqreturn_t pxad_chan_handler(int irq, void *dev_id)
        if (dcsr & PXA_DCSR_RUN)
                return IRQ_NONE;

-       spin_lock_irqsave(&chan->vc.lock, flags);
+       spin_lock(&chan->vc.lock);
        list_for_each_entry_safe(vd, tmp, &chan->vc.desc_issued, node) {
                vd_completed = is_desc_completed(vd);
                dev_dbg(&chan->vc.chan.dev->device,
@@ -658,7 +657,7 @@ static irqreturn_t pxad_chan_handler(int irq, void *dev_id)
                        pxad_launch_chan(chan, to_pxad_sw_desc(vd));
                }
        }
-       spin_unlock_irqrestore(&chan->vc.lock, flags);
+       spin_unlock(&chan->vc.lock);
        wake_up(&chan->wq_state);

        return IRQ_HANDLED;
......
 # SPDX-License-Identifier: GPL-2.0-only
+config QCOM_ADM
+	tristate "Qualcomm ADM support"
+	depends on (ARCH_QCOM || COMPILE_TEST) && !PHYS_ADDR_T_64BIT
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Enable support for the Qualcomm Application Data Mover (ADM) DMA
+	  controller, as present on MSM8x60, APQ8064, and IPQ8064 devices.
+	  This controller provides DMA capabilities for both general purpose
+	  and on-chip peripheral devices.
+
 config QCOM_BAM_DMA
 	tristate "QCOM BAM DMA support"
 	depends on ARCH_QCOM || (COMPILE_TEST && OF && ARM)
@@ -8,6 +19,18 @@ config QCOM_BAM_DMA
 	  Enable support for the QCOM BAM DMA controller. This controller
 	  provides DMA capabilities for a variety of on-chip devices.

+config QCOM_GPI_DMA
+	tristate "Qualcomm Technologies GPI DMA support"
+	depends on ARCH_QCOM
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Enable support for the QCOM GPI DMA controller. This controller
+	  provides DMA capabilities for a variety of peripheral buses such
+	  as I2C, UART, and SPI. Through the GPI dmaengine driver, bus
+	  drivers get a standardized, protocol-independent interface for
+	  transferring data between DDR and the peripheral.
+
 config QCOM_HIDMA_MGMT
 	tristate "Qualcomm Technologies HIDMA Management support"
 	select DMA_ENGINE
......
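As a side note on the QCOM_GPI_DMA help text above: the "standardized, protocol-independent interface" it refers to is the generic dmaengine client API. The fragment below is a minimal, hypothetical client sketch (the consumer device, the "tx" channel name and the FIFO address are assumptions); it shows that a peripheral driver programs a slave transfer the same way regardless of whether GPI, BAM or another engine sits underneath.

#include <linux/device.h>
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

/* Illustrative dmaengine client: one MEM_TO_DEV transfer on a "tx" channel. */
static int foo_start_tx(struct device *dev, dma_addr_t buf, size_t len)
{
        struct dma_async_tx_descriptor *desc;
        struct dma_chan *chan;
        struct dma_slave_config cfg = {
                .direction      = DMA_MEM_TO_DEV,
                .dst_addr       = 0x0c0c0000,   /* assumed peripheral FIFO address */
                .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
        };
        int ret;

        chan = dma_request_chan(dev, "tx");     /* name comes from dma-names in DT */
        if (IS_ERR(chan))
                return PTR_ERR(chan);

        ret = dmaengine_slave_config(chan, &cfg);
        if (ret)
                goto out_release;

        desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
                                           DMA_PREP_INTERRUPT);
        if (!desc) {
                ret = -EIO;
                goto out_release;
        }

        dmaengine_submit(desc);
        dma_async_issue_pending(chan);
        return 0;

out_release:
        dma_release_channel(chan);
        return ret;
}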
 # SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_QCOM_ADM) += qcom_adm.o
 obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
+obj-$(CONFIG_QCOM_GPI_DMA) += gpi.o
 obj-$(CONFIG_QCOM_HIDMA_MGMT) += hdma_mgmt.o
 hdma_mgmt-objs	 := hidma_mgmt.o hidma_mgmt_sys.o
 obj-$(CONFIG_QCOM_HIDMA) +=  hdma.o
......
@@ -875,7 +875,7 @@ static irqreturn_t bam_dma_irq(int irq, void *data)
        ret = bam_pm_runtime_get_sync(bdev->dev);
        if (ret < 0)
-               return ret;
+               return IRQ_NONE;

        if (srcs & BAM_IRQ) {
                clr_mask = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_STTS));
......
@@ -326,10 +326,9 @@ static irqreturn_t sf_pdma_done_isr(int irq, void *dev_id)
 {
        struct sf_pdma_chan *chan = dev_id;
        struct pdma_regs *regs = &chan->regs;
-       unsigned long flags;
        u64 residue;

-       spin_lock_irqsave(&chan->vchan.lock, flags);
+       spin_lock(&chan->vchan.lock);
        writel((readl(regs->ctrl)) & ~PDMA_DONE_STATUS_MASK, regs->ctrl);
        residue = readq(regs->residue);

@@ -346,7 +345,7 @@ static irqreturn_t sf_pdma_done_isr(int irq, void *dev_id)
                sf_pdma_xfer_desc(chan);
        }

-       spin_unlock_irqrestore(&chan->vchan.lock, flags);
+       spin_unlock(&chan->vchan.lock);

        return IRQ_HANDLED;
 }

@@ -355,11 +354,10 @@ static irqreturn_t sf_pdma_err_isr(int irq, void *dev_id)
 {
        struct sf_pdma_chan *chan = dev_id;
        struct pdma_regs *regs = &chan->regs;
-       unsigned long flags;

-       spin_lock_irqsave(&chan->lock, flags);
+       spin_lock(&chan->lock);
        writel((readl(regs->ctrl)) & ~PDMA_ERR_STATUS_MASK, regs->ctrl);
-       spin_unlock_irqrestore(&chan->lock, flags);
+       spin_unlock(&chan->lock);

        tasklet_schedule(&chan->err_tasklet);

@@ -584,7 +582,7 @@ static struct platform_driver sf_pdma_driver = {
        .remove         = sf_pdma_remove,
        .driver         = {
                .name           = "sf-pdma",
-               .of_match_table = of_match_ptr(sf_pdma_dt_ids),
+               .of_match_table = sf_pdma_dt_ids,
        },
 };
......
@@ -1643,13 +1643,12 @@ static irqreturn_t d40_handle_interrupt(int irq, void *data)
        u32 row;
        long chan = -1;
        struct d40_chan *d40c;
-       unsigned long flags;
        struct d40_base *base = data;
        u32 *regs = base->regs_interrupt;
        struct d40_interrupt_lookup *il = base->gen_dmac.il;
        u32 il_size = base->gen_dmac.il_size;

-       spin_lock_irqsave(&base->interrupt_lock, flags);
+       spin_lock(&base->interrupt_lock);

        /* Read interrupt status of both logical and physical channels */
        for (i = 0; i < il_size; i++)
@@ -1694,7 +1693,7 @@ static irqreturn_t d40_handle_interrupt(int irq, void *data)
                spin_unlock(&d40c->lock);
        }

-       spin_unlock_irqrestore(&base->interrupt_lock, flags);
+       spin_unlock(&base->interrupt_lock);

        return IRQ_HANDLED;
 }
......
@@ -264,9 +264,11 @@ static int stm32_dma_get_width(struct stm32_dma_chan *chan,
 }

 static enum dma_slave_buswidth stm32_dma_get_max_width(u32 buf_len,
+                                                      dma_addr_t buf_addr,
                                                       u32 threshold)
 {
        enum dma_slave_buswidth max_width;
+       u64 addr = buf_addr;

        if (threshold == STM32_DMA_FIFO_THRESHOLD_FULL)
                max_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
@@ -277,6 +279,9 @@ static enum dma_slave_buswidth stm32_dma_get_max_width(u32 buf_len,
               max_width > DMA_SLAVE_BUSWIDTH_1_BYTE)
                max_width = max_width >> 1;

+       if (do_div(addr, max_width))
+               max_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+
        return max_width;
 }

@@ -648,21 +653,12 @@ static irqreturn_t stm32_dma_chan_irq(int irq, void *devid)
        scr = stm32_dma_read(dmadev, STM32_DMA_SCR(chan->id));
        sfcr = stm32_dma_read(dmadev, STM32_DMA_SFCR(chan->id));

-       if (status & STM32_DMA_TCI) {
-               stm32_dma_irq_clear(chan, STM32_DMA_TCI);
-               if (scr & STM32_DMA_SCR_TCIE)
-                       stm32_dma_handle_chan_done(chan);
-               status &= ~STM32_DMA_TCI;
-       }
-
-       if (status & STM32_DMA_HTI) {
-               stm32_dma_irq_clear(chan, STM32_DMA_HTI);
-               status &= ~STM32_DMA_HTI;
-       }
-
        if (status & STM32_DMA_FEI) {
                stm32_dma_irq_clear(chan, STM32_DMA_FEI);
                status &= ~STM32_DMA_FEI;
                if (sfcr & STM32_DMA_SFCR_FEIE) {
-                       if (!(scr & STM32_DMA_SCR_EN))
+                       if (!(scr & STM32_DMA_SCR_EN) &&
+                           !(status & STM32_DMA_TCI))
                                dev_err(chan2dev(chan), "FIFO Error\n");
                        else
                                dev_dbg(chan2dev(chan), "FIFO over/underrun\n");
@@ -674,6 +670,19 @@ static irqreturn_t stm32_dma_chan_irq(int irq, void *devid)
                if (sfcr & STM32_DMA_SCR_DMEIE)
                        dev_dbg(chan2dev(chan), "Direct mode overrun\n");
        }
+
+       if (status & STM32_DMA_TCI) {
+               stm32_dma_irq_clear(chan, STM32_DMA_TCI);
+               if (scr & STM32_DMA_SCR_TCIE)
+                       stm32_dma_handle_chan_done(chan);
+               status &= ~STM32_DMA_TCI;
+       }
+
+       if (status & STM32_DMA_HTI) {
+               stm32_dma_irq_clear(chan, STM32_DMA_HTI);
+               status &= ~STM32_DMA_HTI;
+       }
+
        if (status) {
                stm32_dma_irq_clear(chan, status);
                dev_err(chan2dev(chan), "DMA error: status=0x%08x\n", status);
@@ -703,7 +712,7 @@ static void stm32_dma_issue_pending(struct dma_chan *c)
 static int stm32_dma_set_xfer_param(struct stm32_dma_chan *chan,
                                    enum dma_transfer_direction direction,
                                    enum dma_slave_buswidth *buswidth,
-                                   u32 buf_len)
+                                   u32 buf_len, dma_addr_t buf_addr)
 {
        enum dma_slave_buswidth src_addr_width, dst_addr_width;
        int src_bus_width, dst_bus_width;
@@ -735,7 +744,8 @@ static int stm32_dma_set_xfer_param(struct stm32_dma_chan *chan,
                        return dst_burst_size;

                /* Set memory data size */
-               src_addr_width = stm32_dma_get_max_width(buf_len, fifoth);
+               src_addr_width = stm32_dma_get_max_width(buf_len, buf_addr,
+                                                        fifoth);
                chan->mem_width = src_addr_width;
                src_bus_width = stm32_dma_get_width(chan, src_addr_width);
                if (src_bus_width < 0)
@@ -784,7 +794,8 @@ static int stm32_dma_set_xfer_param(struct stm32_dma_chan *chan,
                        return src_burst_size;

                /* Set memory data size */
-               dst_addr_width = stm32_dma_get_max_width(buf_len, fifoth);
+               dst_addr_width = stm32_dma_get_max_width(buf_len, buf_addr,
+                                                        fifoth);
                chan->mem_width = dst_addr_width;
                dst_bus_width = stm32_dma_get_width(chan, dst_addr_width);
                if (dst_bus_width < 0)
@@ -872,7 +883,8 @@ static struct dma_async_tx_descriptor *stm32_dma_prep_slave_sg(
        for_each_sg(sgl, sg, sg_len, i) {
                ret = stm32_dma_set_xfer_param(chan, direction, &buswidth,
-                                              sg_dma_len(sg));
+                                              sg_dma_len(sg),
+                                              sg_dma_address(sg));
                if (ret < 0)
                        goto err;
@@ -940,7 +952,8 @@ static struct dma_async_tx_descriptor *stm32_dma_prep_dma_cyclic(
                return NULL;
        }

-       ret = stm32_dma_set_xfer_param(chan, direction, &buswidth, period_len);
+       ret = stm32_dma_set_xfer_param(chan, direction, &buswidth, period_len,
+                                      buf_addr);
        if (ret < 0)
                return NULL;
@@ -1216,6 +1229,8 @@ static void stm32_dma_free_chan_resources(struct dma_chan *c)
        pm_runtime_put(dmadev->ddev.dev);

        vchan_free_chan_resources(to_virt_chan(c));
+       stm32_dma_clear_reg(&chan->chan_reg);
+       chan->threshold = 0;
 }

 static void stm32_dma_desc_free(struct virt_dma_desc *vdesc)
......
@@ -168,7 +168,7 @@ static void *stm32_dmamux_route_allocate(struct of_phandle_args *dma_spec,
        return ERR_PTR(ret);
 }

-static const struct of_device_id stm32_stm32dma_master_match[] = {
+static const struct of_device_id stm32_stm32dma_master_match[] __maybe_unused = {
        { .compatible = "st,stm32-dma", },
        {},
 };
......
@@ -339,7 +339,7 @@ static struct stm32_mdma_desc *stm32_mdma_alloc_desc(
        struct stm32_mdma_desc *desc;
        int i;

-       desc = kzalloc(offsetof(typeof(*desc), node[count]), GFP_NOWAIT);
+       desc = kzalloc(struct_size(desc, node, count), GFP_NOWAIT);
        if (!desc)
                return NULL;

@@ -1346,7 +1346,7 @@ static irqreturn_t stm32_mdma_irq_handler(int irq, void *devid)
 {
        struct stm32_mdma_device *dmadev = devid;
        struct stm32_mdma_chan *chan = devid;
-       u32 reg, id, ien, status, flag;
+       u32 reg, id, ccr, ien, status;

        /* Find out which channel generates the interrupt */
        status = readl_relaxed(dmadev->base + STM32_MDMA_GISR0);
@@ -1368,67 +1368,71 @@ static irqreturn_t stm32_mdma_irq_handler(int irq, void *devid)
        chan = &dmadev->chan[id];
        if (!chan) {
-               dev_dbg(mdma2dev(dmadev), "MDMA channel not initialized\n");
-               goto exit;
+               dev_warn(mdma2dev(dmadev), "MDMA channel not initialized\n");
+               return IRQ_NONE;
        }

        /* Handle interrupt for the channel */
        spin_lock(&chan->vchan.lock);
-       status = stm32_mdma_read(dmadev, STM32_MDMA_CISR(chan->id));
-       ien = stm32_mdma_read(dmadev, STM32_MDMA_CCR(chan->id));
-       ien &= STM32_MDMA_CCR_IRQ_MASK;
-       ien >>= 1;
+       status = stm32_mdma_read(dmadev, STM32_MDMA_CISR(id));
+       /* Mask Channel ReQuest Active bit which can be set in case of MEM2MEM */
+       status &= ~STM32_MDMA_CISR_CRQA;
+       ccr = stm32_mdma_read(dmadev, STM32_MDMA_CCR(id));
+       ien = (ccr & STM32_MDMA_CCR_IRQ_MASK) >> 1;

        if (!(status & ien)) {
                spin_unlock(&chan->vchan.lock);
-               dev_dbg(chan2dev(chan),
+               dev_warn(chan2dev(chan),
                        "spurious it (status=0x%04x, ien=0x%04x)\n",
                        status, ien);
                return IRQ_NONE;
        }

-       flag = __ffs(status & ien);
-       reg = STM32_MDMA_CIFCR(chan->id);
+       reg = STM32_MDMA_CIFCR(id);

-       switch (1 << flag) {
-       case STM32_MDMA_CISR_TEIF:
-               id = chan->id;
-               status = readl_relaxed(dmadev->base + STM32_MDMA_CESR(id));
-               dev_err(chan2dev(chan), "Transfer Err: stat=0x%08x\n", status);
+       if (status & STM32_MDMA_CISR_TEIF) {
+               dev_err(chan2dev(chan), "Transfer Err: stat=0x%08x\n",
+                       readl_relaxed(dmadev->base + STM32_MDMA_CESR(id)));
                stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CTEIF);
-               break;
+               status &= ~STM32_MDMA_CISR_TEIF;
+       }

-       case STM32_MDMA_CISR_CTCIF:
+       if (status & STM32_MDMA_CISR_CTCIF) {
                stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CCTCIF);
+               status &= ~STM32_MDMA_CISR_CTCIF;
                stm32_mdma_xfer_end(chan);
-               break;
+       }

-       case STM32_MDMA_CISR_BRTIF:
+       if (status & STM32_MDMA_CISR_BRTIF) {
                stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CBRTIF);
-               break;
+               status &= ~STM32_MDMA_CISR_BRTIF;
+       }

-       case STM32_MDMA_CISR_BTIF:
+       if (status & STM32_MDMA_CISR_BTIF) {
                stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CBTIF);
+               status &= ~STM32_MDMA_CISR_BTIF;
                chan->curr_hwdesc++;
                if (chan->desc && chan->desc->cyclic) {
                        if (chan->curr_hwdesc == chan->desc->count)
                                chan->curr_hwdesc = 0;
                        vchan_cyclic_callback(&chan->desc->vdesc);
                }
-               break;
+       }

-       case STM32_MDMA_CISR_TCIF:
+       if (status & STM32_MDMA_CISR_TCIF) {
                stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CLTCIF);
-               break;
+               status &= ~STM32_MDMA_CISR_TCIF;
+       }

-       default:
-               dev_err(chan2dev(chan), "it %d unhandled (status=0x%04x)\n",
-                       1 << flag, status);
+       if (status) {
+               stm32_mdma_set_bits(dmadev, reg, status);
+               dev_err(chan2dev(chan), "DMA error: status=0x%08x\n", status);
+               if (!(ccr & STM32_MDMA_CCR_EN))
+                       dev_err(chan2dev(chan), "chan disabled by HW\n");
        }

        spin_unlock(&chan->vchan.lock);

-exit:
        return IRQ_HANDLED;
 }
......
@@ -1173,6 +1173,30 @@ static struct sun6i_dma_config sun50i_a64_dma_cfg = {
                             BIT(DMA_SLAVE_BUSWIDTH_8_BYTES),
 };

+/*
+ * TODO: Add support for more than 4g physical addressing.
+ *
+ * The A100 binding uses the number of dma channels from the
+ * device tree node.
+ */
+static struct sun6i_dma_config sun50i_a100_dma_cfg = {
+       .clock_autogate_enable = sun6i_enable_clock_autogate_h3,
+       .set_burst_length = sun6i_set_burst_length_h3,
+       .set_drq          = sun6i_set_drq_h6,
+       .set_mode         = sun6i_set_mode_h6,
+       .src_burst_lengths = BIT(1) | BIT(4) | BIT(8) | BIT(16),
+       .dst_burst_lengths = BIT(1) | BIT(4) | BIT(8) | BIT(16),
+       .src_addr_widths   = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+                            BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+                            BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) |
+                            BIT(DMA_SLAVE_BUSWIDTH_8_BYTES),
+       .dst_addr_widths   = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+                            BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+                            BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) |
+                            BIT(DMA_SLAVE_BUSWIDTH_8_BYTES),
+       .has_mbus_clk = true,
+};
+
 /*
  * The H6 binding uses the number of dma channels from the
  * device tree node.
@@ -1225,6 +1249,7 @@ static const struct of_device_id sun6i_dma_match[] = {
        { .compatible = "allwinner,sun8i-h3-dma", .data = &sun8i_h3_dma_cfg },
        { .compatible = "allwinner,sun8i-v3s-dma", .data = &sun8i_v3s_dma_cfg },
        { .compatible = "allwinner,sun50i-a64-dma", .data = &sun50i_a64_dma_cfg },
+       { .compatible = "allwinner,sun50i-a100-dma", .data = &sun50i_a100_dma_cfg },
        { .compatible = "allwinner,sun50i-h6-dma", .data = &sun50i_h6_dma_cfg },
        { /* sentinel */ }
 };
......
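The comment in the A100 hunk above notes that, like the H6, the binding takes the number of DMA channels from the device tree rather than from a fixed value in the config structure. A minimal sketch of what such a probe-time lookup typically looks like is given below; the "dma-channels" property name follows the common DMA binding, while the surrounding function and names are purely illustrative.

#include <linux/of.h>
#include <linux/platform_device.h>

/* Illustrative only: read the channel count from the DT node at probe time. */
static int foo_dma_get_num_channels(struct platform_device *pdev,
                                    u32 *num_channels)
{
        int ret;

        ret = of_property_read_u32(pdev->dev.of_node, "dma-channels",
                                   num_channels);
        if (ret)
                dev_err(&pdev->dev, "Can't get dma-channels\n");

        return ret;
}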
@@ -408,19 +408,18 @@ static irqreturn_t tegra_adma_isr(int irq, void *dev_id)
 {
        struct tegra_adma_chan *tdc = dev_id;
        unsigned long status;
-       unsigned long flags;

-       spin_lock_irqsave(&tdc->vc.lock, flags);
+       spin_lock(&tdc->vc.lock);

        status = tegra_adma_irq_clear(tdc);
        if (status == 0 || !tdc->desc) {
-               spin_unlock_irqrestore(&tdc->vc.lock, flags);
+               spin_unlock(&tdc->vc.lock);
                return IRQ_NONE;
        }

        vchan_cyclic_callback(&tdc->desc->vd);
-       spin_unlock_irqrestore(&tdc->vc.lock, flags);
+       spin_unlock(&tdc->vc.lock);

        return IRQ_HANDLED;
 }
......
@@ -7,5 +7,6 @@ obj-$(CONFIG_TI_K3_UDMA_GLUE_LAYER) += k3-udma-glue.o
 obj-$(CONFIG_TI_K3_PSIL) += k3-psil.o \
                            k3-psil-am654.o \
                            k3-psil-j721e.o \
-                           k3-psil-j7200.o
+                           k3-psil-j7200.o \
+                           k3-psil-am64.o
 obj-$(CONFIG_TI_DMA_CROSSBAR) += dma-crossbar.o
@@ -122,7 +122,7 @@ static void *ti_am335x_xbar_route_allocate(struct of_phandle_args *dma_spec,
        return map;
 }

-static const struct of_device_id ti_am335x_master_match[] = {
+static const struct of_device_id ti_am335x_master_match[] __maybe_unused = {
        { .compatible = "ti,edma3-tpcc", },
        {},
 };
@@ -292,7 +292,7 @@ static const u32 ti_dma_offset[] = {
        [TI_XBAR_SDMA_OFFSET] = 1,
 };

-static const struct of_device_id ti_dra7_master_match[] = {
+static const struct of_device_id ti_dra7_master_match[] __maybe_unused = {
        {
                .compatible = "ti,omap4430-sdma",
                .data = &ti_dma_offset[TI_XBAR_SDMA_OFFSET],
@@ -460,7 +460,7 @@ static int ti_dma_xbar_probe(struct platform_device *pdev)
 static struct platform_driver ti_dma_xbar_driver = {
        .driver = {
                .name = "ti-dma-crossbar",
-               .of_match_table = of_match_ptr(ti_dma_xbar_match),
+               .of_match_table = ti_dma_xbar_match,
        },
        .probe  = ti_dma_xbar_probe,
 };
......
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2020 Texas Instruments Incorporated - https://www.ti.com
* Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
*/
#include <linux/kernel.h>
#include "k3-psil-priv.h"
#define PSIL_PDMA_XY_TR(x) \
{ \
.thread_id = x, \
.ep_config = { \
.ep_type = PSIL_EP_PDMA_XY, \
.mapped_channel_id = -1, \
.default_flow_id = -1, \
}, \
}
#define PSIL_PDMA_XY_PKT(x) \
{ \
.thread_id = x, \
.ep_config = { \
.ep_type = PSIL_EP_PDMA_XY, \
.mapped_channel_id = -1, \
.default_flow_id = -1, \
.pkt_mode = 1, \
}, \
}
#define PSIL_ETHERNET(x, ch, flow_base, flow_cnt) \
{ \
.thread_id = x, \
.ep_config = { \
.ep_type = PSIL_EP_NATIVE, \
.pkt_mode = 1, \
.needs_epib = 1, \
.psd_size = 16, \
.mapped_channel_id = ch, \
.flow_start = flow_base, \
.flow_num = flow_cnt, \
.default_flow_id = flow_base, \
}, \
}
#define PSIL_SAUL(x, ch, flow_base, flow_cnt, default_flow, tx) \
{ \
.thread_id = x, \
.ep_config = { \
.ep_type = PSIL_EP_NATIVE, \
.pkt_mode = 1, \
.needs_epib = 1, \
.psd_size = 64, \
.mapped_channel_id = ch, \
.flow_start = flow_base, \
.flow_num = flow_cnt, \
.default_flow_id = default_flow, \
.notdpkt = tx, \
}, \
}
/* PSI-L source thread IDs, used for RX (DMA_DEV_TO_MEM) */
static struct psil_ep am64_src_ep_map[] = {
/* SAUL */
PSIL_SAUL(0x4000, 17, 32, 8, 32, 0),
PSIL_SAUL(0x4001, 18, 32, 8, 33, 0),
PSIL_SAUL(0x4002, 19, 40, 8, 40, 0),
PSIL_SAUL(0x4003, 20, 40, 8, 41, 0),
/* ICSS_G0 */
PSIL_ETHERNET(0x4100, 21, 48, 16),
PSIL_ETHERNET(0x4101, 22, 64, 16),
PSIL_ETHERNET(0x4102, 23, 80, 16),
PSIL_ETHERNET(0x4103, 24, 96, 16),
/* ICSS_G1 */
PSIL_ETHERNET(0x4200, 25, 112, 16),
PSIL_ETHERNET(0x4201, 26, 128, 16),
PSIL_ETHERNET(0x4202, 27, 144, 16),
PSIL_ETHERNET(0x4203, 28, 160, 16),
/* PDMA_MAIN0 - SPI0-3 */
PSIL_PDMA_XY_PKT(0x4300),
PSIL_PDMA_XY_PKT(0x4301),
PSIL_PDMA_XY_PKT(0x4302),
PSIL_PDMA_XY_PKT(0x4303),
PSIL_PDMA_XY_PKT(0x4304),
PSIL_PDMA_XY_PKT(0x4305),
PSIL_PDMA_XY_PKT(0x4306),
PSIL_PDMA_XY_PKT(0x4307),
PSIL_PDMA_XY_PKT(0x4308),
PSIL_PDMA_XY_PKT(0x4309),
PSIL_PDMA_XY_PKT(0x430a),
PSIL_PDMA_XY_PKT(0x430b),
PSIL_PDMA_XY_PKT(0x430c),
PSIL_PDMA_XY_PKT(0x430d),
PSIL_PDMA_XY_PKT(0x430e),
PSIL_PDMA_XY_PKT(0x430f),
/* PDMA_MAIN0 - USART0-1 */
PSIL_PDMA_XY_PKT(0x4310),
PSIL_PDMA_XY_PKT(0x4311),
/* PDMA_MAIN1 - SPI4 */
PSIL_PDMA_XY_PKT(0x4400),
PSIL_PDMA_XY_PKT(0x4401),
PSIL_PDMA_XY_PKT(0x4402),
PSIL_PDMA_XY_PKT(0x4403),
/* PDMA_MAIN1 - USART2-6 */
PSIL_PDMA_XY_PKT(0x4404),
PSIL_PDMA_XY_PKT(0x4405),
PSIL_PDMA_XY_PKT(0x4406),
PSIL_PDMA_XY_PKT(0x4407),
PSIL_PDMA_XY_PKT(0x4408),
/* PDMA_MAIN1 - ADCs */
PSIL_PDMA_XY_TR(0x440f),
PSIL_PDMA_XY_TR(0x4410),
/* CPSW2 */
PSIL_ETHERNET(0x4500, 16, 16, 16),
};
/* PSI-L destination thread IDs, used for TX (DMA_MEM_TO_DEV) */
static struct psil_ep am64_dst_ep_map[] = {
/* SAUL */
PSIL_SAUL(0xc000, 24, 80, 8, 80, 1),
PSIL_SAUL(0xc001, 25, 88, 8, 88, 1),
/* ICSS_G0 */
PSIL_ETHERNET(0xc100, 26, 96, 1),
PSIL_ETHERNET(0xc101, 27, 97, 1),
PSIL_ETHERNET(0xc102, 28, 98, 1),
PSIL_ETHERNET(0xc103, 29, 99, 1),
PSIL_ETHERNET(0xc104, 30, 100, 1),
PSIL_ETHERNET(0xc105, 31, 101, 1),
PSIL_ETHERNET(0xc106, 32, 102, 1),
PSIL_ETHERNET(0xc107, 33, 103, 1),
/* ICSS_G1 */
PSIL_ETHERNET(0xc200, 34, 104, 1),
PSIL_ETHERNET(0xc201, 35, 105, 1),
PSIL_ETHERNET(0xc202, 36, 106, 1),
PSIL_ETHERNET(0xc203, 37, 107, 1),
PSIL_ETHERNET(0xc204, 38, 108, 1),
PSIL_ETHERNET(0xc205, 39, 109, 1),
PSIL_ETHERNET(0xc206, 40, 110, 1),
PSIL_ETHERNET(0xc207, 41, 111, 1),
/* CPSW2 */
PSIL_ETHERNET(0xc500, 16, 16, 8),
PSIL_ETHERNET(0xc501, 17, 24, 8),
PSIL_ETHERNET(0xc502, 18, 32, 8),
PSIL_ETHERNET(0xc503, 19, 40, 8),
PSIL_ETHERNET(0xc504, 20, 48, 8),
PSIL_ETHERNET(0xc505, 21, 56, 8),
PSIL_ETHERNET(0xc506, 22, 64, 8),
PSIL_ETHERNET(0xc507, 23, 72, 8),
};
struct psil_ep_map am64_ep_map = {
.name = "am64",
.src = am64_src_ep_map,
.src_count = ARRAY_SIZE(am64_src_ep_map),
.dst = am64_dst_ep_map,
.dst_count = ARRAY_SIZE(am64_dst_ep_map),
};
@@ -40,5 +40,6 @@ struct psil_endpoint_config *psil_get_ep_config(u32 thread_id);
 extern struct psil_ep_map am654_ep_map;
 extern struct psil_ep_map j721e_ep_map;
 extern struct psil_ep_map j7200_ep_map;
+extern struct psil_ep_map am64_ep_map;

 #endif /* K3_PSIL_PRIV_H_ */
@@ -20,6 +20,7 @@ static const struct soc_device_attribute k3_soc_devices[] = {
        { .family = "AM65X", .data = &am654_ep_map },
        { .family = "J721E", .data = &j721e_ep_map },
        { .family = "J7200", .data = &j7200_ep_map },
+       { .family = "AM64X", .data = &am64_ep_map },
        { /* sentinel */ }
 };
......
@@ -50,6 +50,18 @@ struct udma_dev *of_xudma_dev_get(struct device_node *np, const char *property)
 }
 EXPORT_SYMBOL(of_xudma_dev_get);

+struct device *xudma_get_device(struct udma_dev *ud)
+{
+       return ud->dev;
+}
+EXPORT_SYMBOL(xudma_get_device);
+
+struct k3_ringacc *xudma_get_ringacc(struct udma_dev *ud)
+{
+       return ud->ringacc;
+}
+EXPORT_SYMBOL(xudma_get_ringacc);
+
 u32 xudma_dev_get_psil_base(struct udma_dev *ud)
 {
        return ud->psil_base;
@@ -76,6 +88,9 @@ EXPORT_SYMBOL(xudma_free_gp_rflow_range);

 bool xudma_rflow_is_gp(struct udma_dev *ud, int id)
 {
+       if (!ud->rflow_gp_map)
+               return false;
+
        return !test_bit(id, ud->rflow_gp_map);
 }
 EXPORT_SYMBOL(xudma_rflow_is_gp);
@@ -107,6 +122,12 @@ void xudma_rflow_put(struct udma_dev *ud, struct udma_rflow *p)
 }
 EXPORT_SYMBOL(xudma_rflow_put);

+int xudma_get_rflow_ring_offset(struct udma_dev *ud)
+{
+       return ud->tflow_cnt;
+}
+EXPORT_SYMBOL(xudma_get_rflow_ring_offset);
+
 #define XUDMA_GET_RESOURCE_ID(res)                     \
 int xudma_##res##_get_id(struct udma_##res *p)         \
 {                                                      \
@@ -136,3 +157,27 @@ void xudma_##res##rt_write(struct udma_##res *p, int reg, u32 val) \
 EXPORT_SYMBOL(xudma_##res##rt_write)

 XUDMA_RT_IO_FUNCTIONS(tchan);
 XUDMA_RT_IO_FUNCTIONS(rchan);
+
+int xudma_is_pktdma(struct udma_dev *ud)
+{
+       return ud->match_data->type == DMA_TYPE_PKTDMA;
+}
+EXPORT_SYMBOL(xudma_is_pktdma);
+
+int xudma_pktdma_tflow_get_irq(struct udma_dev *ud, int udma_tflow_id)
+{
+       const struct udma_oes_offsets *oes = &ud->soc_data->oes;
+
+       return ti_sci_inta_msi_get_virq(ud->dev, udma_tflow_id +
+                                       oes->pktdma_tchan_flow);
+}
+EXPORT_SYMBOL(xudma_pktdma_tflow_get_irq);
+
+int xudma_pktdma_rflow_get_irq(struct udma_dev *ud, int udma_rflow_id)
+{
+       const struct udma_oes_offsets *oes = &ud->soc_data->oes;
+
+       return ti_sci_inta_msi_get_virq(ud->dev, udma_rflow_id +
+                                       oes->pktdma_rchan_flow);
+}
+EXPORT_SYMBOL(xudma_pktdma_rflow_get_irq);
/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
/* Copyright (c) 2020, Linaro Ltd. */
#ifndef __DT_BINDINGS_DMA_QCOM_GPI_H__
#define __DT_BINDINGS_DMA_QCOM_GPI_H__
#define QCOM_GPI_SPI 1
#define QCOM_GPI_UART 2
#define QCOM_GPI_I2C 3
#endif /* __DT_BINDINGS_DMA_QCOM_GPI_H__ */