Commit 1bc5e157 authored by Linus Torvalds

Merge tag 'dmaengine-4.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This time we have support for few new devices, few new features and
  odd fixes spread thru the subsystem.

  New devices added:
   - support for CSRatlas7 dma controller
   - Allwinner H3(sun8i) controller
   - TI DMA crossbar driver on DRA7x
   - new pxa driver

  New features added:
   - memset support is brought back now that we have a user in the xdmac controller
   - interleaved transfers support different source and destination strides
   - supporting DMA routers and configuration through DT
   - support for reusing descriptors
   - xdmac memset and interleaved transfer support
   - hdmac support for interleaved transfers
   - omap-dma support for memcpy

  Others:
   - Constify platform_device_id
   - mv_xor fixes and improvements"

* tag 'dmaengine-4.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (46 commits)
  dmaengine: xgene: fix file permission
  dmaengine: fsl-edma: clear pending interrupts on initialization
  dmaengine: xdmac: Add memset support
  Documentation: dmaengine: document DMA_CTRL_ACK
  dmaengine: virt-dma: don't always free descriptor upon completion
  dmaengine: Revert "drivers/dma: remove unused support for MEMSET operations"
  dmaengine: hdmac: Implement interleaved transfers
  dmaengine: Move icg helpers to global header
  dmaengine: mv_xor: improve descriptors list handling and reduce locking
  dmaengine: mv_xor: Enlarge descriptor pool size
  dmaengine: mv_xor: add support for a38x command in descriptor mode
  dmaengine: mv_xor: Rename function for consistent naming
  dmaengine: mv_xor: bug fix for racing condition in descriptors cleanup
  dmaengine: pl330: fix wording in mcbufsz message
  dmaengine: sirf: add CSRatlas7 SoC support
  dmaengine: xgene-dma: Fix "incorrect type in assignement" warnings
  dmaengine: fix kernel-doc documentation
  dmaengine: pxa_dma: add support for legacy transition
  dmaengine: pxa_dma: add debug information
  dmaengine: pxa: add pxa dmaengine driver
  ...
......@@ -31,6 +31,34 @@ Example:
dma-requests = <127>;
};
* DMA router
DMA routers are transparent IP blocks used to route DMA request lines from
devices to the DMA controller. Some SoCs (like TI DRA7x) have more peripherals
integrated with DMA requests than what the DMA controller can handle directly.
Required property:
- dma-masters: phandle of the DMA controller or list of phandles for
the DMA controllers the router can direct the signal to.
- #dma-cells: Must be at least 1. Used to provide DMA router specific
information. See DMA client binding below for more
details.
Optional properties:
- dma-requests: Number of incoming request lines the router can handle.
- In the node pointed to by dma-masters:
- dma-requests: The router driver might need to look for this in order
to configure the routing.
Example:
sdma_xbar: dma-router@4a002b78 {
compatible = "ti,dra7-dma-crossbar";
reg = <0x4a002b78 0xfc>;
#dma-cells = <1>;
dma-requests = <205>;
ti,dma-safe-map = <0>;
dma-masters = <&sdma>;
};
* DMA client
......
* Marvell XOR engines
Required properties:
- compatible: Should be "marvell,orion-xor"
- compatible: Should be "marvell,orion-xor" or "marvell,armada-380-xor"
- reg: Should contain registers location and length (two sets)
the first set is the low registers, the second set the high
registers for the XOR engine.
......
......@@ -3,7 +3,8 @@
See dma.txt first
Required properties:
- compatible: Should be "sirf,prima2-dmac" or "sirf,marco-dmac"
- compatible: Should be "sirf,prima2-dmac", "sirf,atlas7-dmac" or
"sirf,atlas7-dmac-v2"
- reg: Should contain DMA registers location and length.
- interrupts: Should contain one interrupt shared by all channels
- #dma-cells: must be <1>. used to represent the number of integer
......
......@@ -4,7 +4,10 @@ This driver follows the generic DMA bindings defined in dma.txt.
Required properties:
- compatible: Must be "allwinner,sun6i-a31-dma" or "allwinner,sun8i-a23-dma"
- compatible: Must be one of
"allwinner,sun6i-a31-dma"
"allwinner,sun8i-a23-dma"
"allwinner,sun8i-h3-dma"
- reg: Should contain the registers base address and length
- interrupts: Should contain a reference to the interrupt used by this device
- clocks: Should contain a reference to the parent AHB clock
......
Texas Instruments DMA Crossbar (DMA request router)
Required properties:
- compatible: "ti,dra7-dma-crossbar" for DRA7xx DMA crossbar
- reg: Memory map for accessing module
- #dma-cells: Should be set to <1>.
Clients should use the crossbar request number (input)
- dma-requests: Number of DMA requests the crossbar can receive
- dma-masters: phandle pointing to the DMA controller
The DMA controller node needs to have the following properties:
- dma-requests: Number of DMA requests the controller can handle
Optional properties:
- ti,dma-safe-map: Safe routing value for unused request lines
Example:
/* DMA controller */
sdma: dma-controller@4a056000 {
compatible = "ti,omap4430-sdma";
reg = <0x4a056000 0x1000>;
interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
#dma-cells = <1>;
dma-channels = <32>;
dma-requests = <127>;
};
/* DMA crossbar */
sdma_xbar: dma-router@4a002b78 {
compatible = "ti,dra7-dma-crossbar";
reg = <0x4a002b78 0xfc>;
#dma-cells = <1>;
dma-requests = <205>;
ti,dma-safe-map = <0>;
dma-masters = <&sdma>;
};
/* DMA client */
uart1: serial@4806a000 {
compatible = "ti,omap4-uart";
reg = <0x4806a000 0x100>;
interrupts-extended = <&gic GIC_SPI 67 IRQ_TYPE_LEVEL_HIGH>;
ti,hwmods = "uart1";
clock-frequency = <48000000>;
status = "disabled";
dmas = <&sdma_xbar 49>, <&sdma_xbar 50>;
dma-names = "tx", "rx";
};
......@@ -345,11 +345,12 @@ where to put them)
that abstracts it away.
* DMA_CTRL_ACK
- Undocumented feature
- No one really has an idea of what it's about, besides being
related to reusing the DMA transaction descriptors or having
additional transactions added to it in the async-tx API
- Useless in the case of the slave API
- If set, the transfer can be reused after being completed.
- There is a guarantee the transfer won't be freed until it is acked
by async_tx_ack().
- As a consequence, if a device driver wants to skip the dma_map_sg() and
dma_unmap_sg() between two transfers, because the DMA'd data wasn't used,
it can resubmit the transfer right after its completion (see the sketch below).
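A minimal client-side sketch of that reuse pattern, assuming a slave channel
whose provider keeps DMA_CTRL_ACK-flagged descriptors alive for reuse (as the
virt-dma change in this pull does); the helper name send_twice() is made up
and the completion waits are elided:

#include <linux/dmaengine.h>

/* Illustrative only: reuse one descriptor for two transfers, skipping the
 * dma_map/dma_unmap step in between. */
static int send_twice(struct dma_chan *chan, dma_addr_t buf, size_t len)
{
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	/* DMA_CTRL_ACK: per the note above, the completed transfer may be
	 * reused instead of being freed by the provider. */
	tx = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!tx)
		return -ENOMEM;

	cookie = dmaengine_submit(tx);
	if (dma_submit_error(cookie))
		return -EIO;
	dma_async_issue_pending(chan);
	/* ... wait for completion (callback or dmaengine_tx_status()) ... */

	/* Resubmit the very same descriptor, without remapping 'buf'. */
	cookie = dmaengine_submit(tx);
	if (dma_submit_error(cookie))
		return -EIO;
	dma_async_issue_pending(chan);
	/* ... wait again ... */

	return 0;
}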
General Design Notes
--------------------
......
PXA/MMP - DMA Slave controller
==============================
Constraints
-----------
a) Transfers hot queuing
A driver submitting a transfer and issuing it should be granted that the
transfer is queued even on a running DMA channel.
This implies that the queuing doesn't wait for the previous transfer end,
and that the descriptor chaining is not only done in the irq/tasklet code
triggered by the end of the transfer.
A transfer which is submitted and issued on a phy doesn't wait for the phy to
stop and restart, but is submitted on a "running channel". The other
drivers, especially mmp_pdma, waited for the phy to stop before relaunching
a new transfer.
b) All transfers having asked for confirmation should be signaled
Any issued transfer with DMA_PREP_INTERRUPT should trigger a callback call.
This implies that even if an irq/tasklet is triggered by the end of tx1 but,
by the time the irq/tasklet runs, tx2 is already finished, both tx1->complete()
and tx2->complete() should be called.
c) Channel running state
A driver should be able to query if a channel is running or not. For the
multimedia case, such as video capture, if a transfer is submitted and then
a check of the DMA channel reports a "stopped channel", the transfer should
not be issued until the next "start of frame interrupt", hence the need to
know if a channel is in running or stopped state.
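As an illustration, one way a client could approximate such a check with the
generic dmaengine API is to poll the status of its most recently issued
cookie; the helper below is hypothetical and treats DMA_IN_PROGRESS/DMA_PAUSED
as "running":

#include <linux/dmaengine.h>

/* Illustrative helper: report whether the channel still has work in flight,
 * based on the status of the last issued cookie. */
static bool capture_channel_busy(struct dma_chan *chan, dma_cookie_t last_cookie)
{
	enum dma_status status = dmaengine_tx_status(chan, last_cookie, NULL);

	return status == DMA_IN_PROGRESS || status == DMA_PAUSED;
}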
d) Bandwidth guarantee
The PXA architecture has 4 levels of DMA priorities : high, normal, low.
The high priorities get twice as much bandwidth as the normal, which get twice
as much as the low priorities.
A driver should be able to request a priority, especially for real-time
clients such as pxa_camera with (big) throughput requirements.
Design
------
a) Virtual channels
Same concept as in the sa11x0 driver, i.e. a driver is assigned a "virtual
channel" linked to the requestor line, and the physical DMA channel is
assigned on the fly when the transfer is issued.
b) Transfer anatomy for a scatter-gather transfer
+------------+-----+---------------+----------------+-----------------+
| desc-sg[0] | ... | desc-sg[last] | status updater | finisher/linker |
+------------+-----+---------------+----------------+-----------------+
This structure is pointed to by dma->sg_cpu.
The descriptors are used as follows :
- desc-sg[i]: i-th descriptor, transferring the i-th sg
element to the video buffer scatter gather
- status updater
Transfers a single u32 to a well-known dma coherent memory address to leave
a trace that this transfer is done. The "well-known" address is unique per
physical channel, meaning that a read of this value will tell which is the
last finished transfer at that point in time (see the sketch after this list).
- finisher: has ddadr=DADDR_STOP, dcmd=ENDIRQEN
- linker: has ddadr= desc-sg[0] of next transfer, dcmd=0
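A hypothetical sketch (not the actual pxa_dma code) of how such a completion
mark could be consumed; the structure and helper names are invented for
illustration:

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/types.h>

/* Illustrative only: each transfer records the dma address of its own
 * "status updater" descriptor, and the updater writes that same value into
 * the per-physical-channel completion mark (dma coherent memory). */
struct pxa_sw_desc {			/* hypothetical software descriptor */
	struct list_head node;
	dma_addr_t updater_phys;	/* dma address of the status updater */
};

static struct pxa_sw_desc *find_last_completed(const u32 *completion_mark,
					       struct list_head *issued)
{
	u32 mark = READ_ONCE(*completion_mark);
	struct pxa_sw_desc *sw;

	/* Lockless read: the transfer whose updater wrote 'mark' is the most
	 * recently finished one; everything issued before it is done too. */
	list_for_each_entry(sw, issued, node)
		if (lower_32_bits(sw->updater_phys) == mark)
			return sw;

	return NULL;	/* nothing completed yet */
}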
c) Transfers hot-chaining
Suppose the running chain is :
Buffer 1 Buffer 2
+---------+----+---+ +----+----+----+---+
| d0 | .. | dN | l | | d0 | .. | dN | f |
+---------+----+-|-+ ^----+----+----+---+
| |
+----+
After a call to dmaengine_submit(b3), the chain will look like :
Buffer 1 Buffer 2 Buffer 3
+---------+----+---+ +----+----+----+---+ +----+----+----+---+
| d0 | .. | dN | l | | d0 | .. | dN | l | | d0 | .. | dN | f |
+---------+----+-|-+ ^----+----+----+-|-+ ^----+----+----+---+
| | | |
+----+ +----+
new_link
If the DMA channel stopped while new_link was being created, it is _not_
restarted. Hot-chaining doesn't break the assumption that
dma_async_issue_pending() is to be used to ensure the transfer is actually started.
One exception to this rule :
- if Buffer1 and Buffer2 had all their addresses 8 bytes aligned
- and if Buffer3 has at least one address not 4 bytes aligned
- then hot-chaining cannot happen, as the channel must be stopped, the
"align bit" must be set, and the channel restarted. As a consequence, in this
specific case where the DMA is already running in aligned mode, such a
transfer's tx_submit() will only queue it on the submitted queue.
d) Transfers completion updater
Each time a transfer is completed on a channel, an interrupt might or might
not be generated, depending on the client's request. But in either case, the
last descriptor of a transfer, the "status updater", will write the latest
completed transfer into the physical channel's completion mark.
This speeds up residue calculation for large transfers such as video
buffers, which hold around 6k descriptors or more. It also makes it possible,
without any lock, to find out which is the latest completed transfer in a
running DMA chain.
e) Transfers completion, irq and tasklet
When a transfer flagged as "DMA_PREP_INTERRUPT" is finished, the dma irq
is raised. Upon this interrupt, a tasklet is scheduled for the physical
channel.
The tasklet is responsible for :
- reading the physical channel last updater mark
- calling all the transfer callbacks of finished transfers, based on
that mark, and each transfer flags.
If a transfer is completed while this handling is done, a dma irq will
be raised, and the tasklet will be scheduled once again, having a new
updater mark.
f) Residue
Residue granularity will be descriptor-based. Issued but not completed
transfers will have all of their descriptors scanned against the currently
running descriptor.
g) Most complicated case of driver's tx queues
The most tricky situation is when :
- there are not-"acked" transfers (tx0)
- a driver submitted an aligned tx1, not chained
- a driver submitted an aligned tx2 => tx2 is cold chained to tx1
- a driver issued tx1+tx2 => channel is running in aligned mode
- a driver submitted an aligned tx3 => tx3 is hot-chained
- a driver submitted an unaligned tx4 => tx4 is put in submitted queue,
not chained
- a driver issued tx4 => tx4 is put in issued queue, not chained
- a driver submitted an aligned tx5 => tx5 is put in submitted queue, not
chained
- a driver submitted an aligned tx6 => tx6 is put in submitted queue,
cold chained to tx5
This translates into (after tx4 is issued) :
- issued queue
+-----+ +-----+ +-----+ +-----+
| tx1 | | tx2 | | tx3 | | tx4 |
+---|-+ ^---|-+ ^-----+ +-----+
| | | |
+---+ +---+
- submitted queue
+-----+ +-----+
| tx5 | | tx6 |
+---|-+ ^-----+
| |
+---+
- completed queue : empty
- allocated queue : tx0
It should be noted that after tx3 is completed, the channel is stopped, and
restarted in "unaligned mode" to handle tx4.
Author: Robert Jarzmik <robert.jarzmik@free.fr>
......@@ -8174,6 +8174,7 @@ T: git git://github.com/hzhuang1/linux.git
T: git git://github.com/rjarzmik/linux.git
S: Maintained
F: arch/arm/mach-pxa/
F: drivers/dma/pxa*
F: drivers/pcmcia/pxa2xx*
F: drivers/spi/spi-pxa2xx*
F: drivers/usb/gadget/udc/pxa2*
......
......@@ -162,6 +162,17 @@ config MX3_IPU_IRQS
To avoid bloating the irq_desc[] array we allocate a sufficient
number of IRQ slots and map them dynamically to specific sources.
config PXA_DMA
bool "PXA DMA support"
depends on (ARCH_MMP || ARCH_PXA)
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
help
Support the DMA engine for PXA. It is also compatible with MMP PDMA
platform. The internal DMA IP of all PXA variants is supported, with
16 to 32 channels for peripheral to memory or memory to memory
transfers.
config TXX9_DMAC
tristate "Toshiba TXx9 SoC DMA support"
depends on MACH_TX49XX || MACH_TX39XX
......@@ -245,6 +256,9 @@ config TI_EDMA
Enable support for the TI EDMA controller. This DMA
engine is found on TI DaVinci and AM33xx parts.
config TI_DMA_CROSSBAR
bool
config ARCH_HAS_ASYNC_TX_FIND_CHANNEL
bool
......@@ -330,6 +344,7 @@ config DMA_OMAP
depends on ARCH_OMAP
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
select TI_DMA_CROSSBAR if SOC_DRA7XX
config DMA_BCM2835
tristate "BCM2835 DMA engine support"
......
......@@ -25,6 +25,7 @@ obj-$(CONFIG_AMCC_PPC440SPE_ADMA) += ppc4xx/
obj-$(CONFIG_IMX_SDMA) += imx-sdma.o
obj-$(CONFIG_IMX_DMA) += imx-dma.o
obj-$(CONFIG_MXS_DMA) += mxs-dma.o
obj-$(CONFIG_PXA_DMA) += pxa_dma.o
obj-$(CONFIG_TIMB_DMA) += timb_dma.o
obj-$(CONFIG_SIRF_DMA) += sirf-dma.o
obj-$(CONFIG_TI_EDMA) += edma.o
......@@ -38,6 +39,7 @@ obj-$(CONFIG_EP93XX_DMA) += ep93xx_dma.o
obj-$(CONFIG_DMA_SA11X0) += sa11x0-dma.o
obj-$(CONFIG_MMP_TDMA) += mmp_tdma.o
obj-$(CONFIG_DMA_OMAP) += omap-dma.o
obj-$(CONFIG_TI_DMA_CROSSBAR) += ti-dma-crossbar.o
obj-$(CONFIG_DMA_BCM2835) += bcm2835-dma.o
obj-$(CONFIG_MMP_PDMA) += mmp_pdma.o
obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o
......
......@@ -474,7 +474,7 @@ static void pl08x_terminate_phy_chan(struct pl08x_driver_data *pl08x,
u32 val = readl(ch->reg_config);
val &= ~(PL080_CONFIG_ENABLE | PL080_CONFIG_ERR_IRQ_MASK |
PL080_CONFIG_TC_IRQ_MASK);
PL080_CONFIG_TC_IRQ_MASK);
writel(val, ch->reg_config);
......
......@@ -247,6 +247,10 @@ static void atc_dostart(struct at_dma_chan *atchan, struct at_desc *first)
channel_writel(atchan, CTRLA, 0);
channel_writel(atchan, CTRLB, 0);
channel_writel(atchan, DSCR, first->txd.phys);
channel_writel(atchan, SPIP, ATC_SPIP_HOLE(first->src_hole) |
ATC_SPIP_BOUNDARY(first->boundary));
channel_writel(atchan, DPIP, ATC_DPIP_HOLE(first->dst_hole) |
ATC_DPIP_BOUNDARY(first->boundary));
dma_writel(atdma, CHER, atchan->mask);
vdbg_dump_regs(atchan);
......@@ -634,6 +638,104 @@ static dma_cookie_t atc_tx_submit(struct dma_async_tx_descriptor *tx)
return cookie;
}
/**
* atc_prep_dma_interleaved - prepare memory to memory interleaved operation
* @chan: the channel to prepare operation on
* @xt: Interleaved transfer template
* @flags: tx descriptor status flags
*/
static struct dma_async_tx_descriptor *
atc_prep_dma_interleaved(struct dma_chan *chan,
struct dma_interleaved_template *xt,
unsigned long flags)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
struct data_chunk *first = xt->sgl;
struct at_desc *desc = NULL;
size_t xfer_count;
unsigned int dwidth;
u32 ctrla;
u32 ctrlb;
size_t len = 0;
int i;
dev_info(chan2dev(chan),
"%s: src=0x%08x, dest=0x%08x, numf=%d, frame_size=%d, flags=0x%lx\n",
__func__, xt->src_start, xt->dst_start, xt->numf,
xt->frame_size, flags);
if (unlikely(!xt || xt->numf != 1 || !xt->frame_size))
return NULL;
/*
* The controller can only "skip" X bytes every Y bytes, so we
* need to make sure we are given a template that fit that
* description, ie a template with chunks that always have the
* same size, with the same ICGs.
*/
for (i = 0; i < xt->frame_size; i++) {
struct data_chunk *chunk = xt->sgl + i;
if ((chunk->size != xt->sgl->size) ||
(dmaengine_get_dst_icg(xt, chunk) != dmaengine_get_dst_icg(xt, first)) ||
(dmaengine_get_src_icg(xt, chunk) != dmaengine_get_src_icg(xt, first))) {
dev_err(chan2dev(chan),
"%s: the controller can transfer only identical chunks\n",
__func__);
return NULL;
}
len += chunk->size;
}
dwidth = atc_get_xfer_width(xt->src_start,
xt->dst_start, len);
xfer_count = len >> dwidth;
if (xfer_count > ATC_BTSIZE_MAX) {
dev_err(chan2dev(chan), "%s: buffer is too big\n", __func__);
return NULL;
}
ctrla = ATC_SRC_WIDTH(dwidth) |
ATC_DST_WIDTH(dwidth);
ctrlb = ATC_DEFAULT_CTRLB | ATC_IEN
| ATC_SRC_ADDR_MODE_INCR
| ATC_DST_ADDR_MODE_INCR
| ATC_SRC_PIP
| ATC_DST_PIP
| ATC_FC_MEM2MEM;
/* create the transfer */
desc = atc_desc_get(atchan);
if (!desc) {
dev_err(chan2dev(chan),
"%s: couldn't allocate our descriptor\n", __func__);
return NULL;
}
desc->lli.saddr = xt->src_start;
desc->lli.daddr = xt->dst_start;
desc->lli.ctrla = ctrla | xfer_count;
desc->lli.ctrlb = ctrlb;
desc->boundary = first->size >> dwidth;
desc->dst_hole = (dmaengine_get_dst_icg(xt, first) >> dwidth) + 1;
desc->src_hole = (dmaengine_get_src_icg(xt, first) >> dwidth) + 1;
desc->txd.cookie = -EBUSY;
desc->total_len = desc->len = len;
desc->tx_width = dwidth;
/* set end-of-link to the last link descriptor of list*/
set_desc_eol(desc);
desc->txd.flags = flags; /* client is in control of this ack */
return &desc->txd;
}
/**
* atc_prep_dma_memcpy - prepare a memcpy operation
* @chan: the channel to prepare operation on
......@@ -1609,6 +1711,7 @@ static int __init at_dma_probe(struct platform_device *pdev)
/* setup platform data for each SoC */
dma_cap_set(DMA_MEMCPY, at91sam9rl_config.cap_mask);
dma_cap_set(DMA_SG, at91sam9rl_config.cap_mask);
dma_cap_set(DMA_INTERLEAVE, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_MEMCPY, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_SLAVE, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_SG, at91sam9g45_config.cap_mask);
......@@ -1713,6 +1816,9 @@ static int __init at_dma_probe(struct platform_device *pdev)
atdma->dma_common.dev = &pdev->dev;
/* set prep routines based on capability */
if (dma_has_cap(DMA_INTERLEAVE, atdma->dma_common.cap_mask))
atdma->dma_common.device_prep_interleaved_dma = atc_prep_dma_interleaved;
if (dma_has_cap(DMA_MEMCPY, atdma->dma_common.cap_mask))
atdma->dma_common.device_prep_dma_memcpy = atc_prep_dma_memcpy;
......
......@@ -196,6 +196,11 @@ struct at_desc {
size_t len;
u32 tx_width;
size_t total_len;
/* Interleaved data */
size_t boundary;
size_t dst_hole;
size_t src_hole;
};
static inline struct at_desc *
......
......@@ -235,6 +235,10 @@ struct at_xdmac_lld {
dma_addr_t mbr_sa; /* Source Address Member */
dma_addr_t mbr_da; /* Destination Address Member */
u32 mbr_cfg; /* Configuration Register */
u32 mbr_bc; /* Block Control Register */
u32 mbr_ds; /* Data Stride Register */
u32 mbr_sus; /* Source Microblock Stride Register */
u32 mbr_dus; /* Destination Microblock Stride Register */
};
......@@ -358,6 +362,8 @@ static void at_xdmac_start_xfer(struct at_xdmac_chan *atchan,
if (at_xdmac_chan_is_cyclic(atchan)) {
reg = AT_XDMAC_CNDC_NDVIEW_NDV1;
at_xdmac_chan_write(atchan, AT_XDMAC_CC, first->lld.mbr_cfg);
} else if (first->lld.mbr_ubc & AT_XDMAC_MBR_UBC_NDV3) {
reg = AT_XDMAC_CNDC_NDVIEW_NDV3;
} else {
/*
* No need to write AT_XDMAC_CC reg, it will be done when the
......@@ -465,6 +471,33 @@ static struct at_xdmac_desc *at_xdmac_get_desc(struct at_xdmac_chan *atchan)
return desc;
}
static void at_xdmac_queue_desc(struct dma_chan *chan,
struct at_xdmac_desc *prev,
struct at_xdmac_desc *desc)
{
if (!prev || !desc)
return;
prev->lld.mbr_nda = desc->tx_dma_desc.phys;
prev->lld.mbr_ubc |= AT_XDMAC_MBR_UBC_NDE;
dev_dbg(chan2dev(chan), "%s: chain lld: prev=0x%p, mbr_nda=%pad\n",
__func__, prev, &prev->lld.mbr_nda);
}
static inline void at_xdmac_increment_block_count(struct dma_chan *chan,
struct at_xdmac_desc *desc)
{
if (!desc)
return;
desc->lld.mbr_bc++;
dev_dbg(chan2dev(chan),
"%s: incrementing the block count of the desc 0x%p\n",
__func__, desc);
}
static struct dma_chan *at_xdmac_xlate(struct of_phandle_args *dma_spec,
struct of_dma *of_dma)
{
......@@ -656,19 +689,14 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV2 /* next descriptor view */
| AT_XDMAC_MBR_UBC_NDEN /* next descriptor dst parameter update */
| AT_XDMAC_MBR_UBC_NSEN /* next descriptor src parameter update */
| (i == sg_len - 1 ? 0 : AT_XDMAC_MBR_UBC_NDE) /* descriptor fetch */
| (len >> fixed_dwidth); /* microblock length */
dev_dbg(chan2dev(chan),
"%s: lld: mbr_sa=%pad, mbr_da=%pad, mbr_ubc=0x%08x\n",
__func__, &desc->lld.mbr_sa, &desc->lld.mbr_da, desc->lld.mbr_ubc);
/* Chain lld. */
if (prev) {
prev->lld.mbr_nda = desc->tx_dma_desc.phys;
dev_dbg(chan2dev(chan),
"%s: chain lld: prev=0x%p, mbr_nda=%pad\n",
__func__, prev, &prev->lld.mbr_nda);
}
if (prev)
at_xdmac_queue_desc(chan, prev, desc);
prev = desc;
if (!first)
......@@ -748,7 +776,6 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV1
| AT_XDMAC_MBR_UBC_NDEN
| AT_XDMAC_MBR_UBC_NSEN
| AT_XDMAC_MBR_UBC_NDE
| period_len >> at_xdmac_get_dwidth(desc->lld.mbr_cfg);
dev_dbg(chan2dev(chan),
......@@ -756,12 +783,8 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
__func__, &desc->lld.mbr_sa, &desc->lld.mbr_da, desc->lld.mbr_ubc);
/* Chain lld. */
if (prev) {
prev->lld.mbr_nda = desc->tx_dma_desc.phys;
dev_dbg(chan2dev(chan),
"%s: chain lld: prev=0x%p, mbr_nda=%pad\n",
__func__, prev, &prev->lld.mbr_nda);
}
if (prev)
at_xdmac_queue_desc(chan, prev, desc);
prev = desc;
if (!first)
......@@ -783,6 +806,215 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
return &first->tx_dma_desc;
}
static inline u32 at_xdmac_align_width(struct dma_chan *chan, dma_addr_t addr)
{
u32 width;
/*
* Check address alignment to select the greater data width we
* can use.
*
* Some XDMAC implementations don't provide dword transfer, in
* this case selecting dword has the same behavior as
* selecting word transfers.
*/
if (!(addr & 7)) {
width = AT_XDMAC_CC_DWIDTH_DWORD;
dev_dbg(chan2dev(chan), "%s: dwidth: double word\n", __func__);
} else if (!(addr & 3)) {
width = AT_XDMAC_CC_DWIDTH_WORD;
dev_dbg(chan2dev(chan), "%s: dwidth: word\n", __func__);
} else if (!(addr & 1)) {
width = AT_XDMAC_CC_DWIDTH_HALFWORD;
dev_dbg(chan2dev(chan), "%s: dwidth: half word\n", __func__);
} else {
width = AT_XDMAC_CC_DWIDTH_BYTE;
dev_dbg(chan2dev(chan), "%s: dwidth: byte\n", __func__);
}
return width;
}
static struct at_xdmac_desc *
at_xdmac_interleaved_queue_desc(struct dma_chan *chan,
struct at_xdmac_chan *atchan,
struct at_xdmac_desc *prev,
dma_addr_t src, dma_addr_t dst,
struct dma_interleaved_template *xt,
struct data_chunk *chunk)
{
struct at_xdmac_desc *desc;
u32 dwidth;
unsigned long flags;
size_t ublen;
/*
* WARNING: The channel configuration is set here since there is no
* dmaengine_slave_config call in this case. Moreover we don't know the
* direction, it involves we can't dynamically set the source and dest
* interface so we have to use the same one. Only interface 0 allows EBI
* access. Hopefully we can access DDR through both ports (at least on
* SAMA5D4x), so we can use the same interface for source and dest,
* that solves the fact we don't know the direction.
*/
u32 chan_cc = AT_XDMAC_CC_DIF(0)
| AT_XDMAC_CC_SIF(0)
| AT_XDMAC_CC_MBSIZE_SIXTEEN
| AT_XDMAC_CC_TYPE_MEM_TRAN;
dwidth = at_xdmac_align_width(chan, src | dst | chunk->size);
if (chunk->size >= (AT_XDMAC_MBR_UBC_UBLEN_MAX << dwidth)) {
dev_dbg(chan2dev(chan),
"%s: chunk too big (%d, max size %lu)...\n",
__func__, chunk->size,
AT_XDMAC_MBR_UBC_UBLEN_MAX << dwidth);
return NULL;
}
if (prev)
dev_dbg(chan2dev(chan),
"Adding items at the end of desc 0x%p\n", prev);
if (xt->src_inc) {
if (xt->src_sgl)
chan_cc |= AT_XDMAC_CC_SAM_UBS_DS_AM;
else
chan_cc |= AT_XDMAC_CC_SAM_INCREMENTED_AM;
}
if (xt->dst_inc) {
if (xt->dst_sgl)
chan_cc |= AT_XDMAC_CC_DAM_UBS_DS_AM;
else
chan_cc |= AT_XDMAC_CC_DAM_INCREMENTED_AM;
}
spin_lock_irqsave(&atchan->lock, flags);
desc = at_xdmac_get_desc(atchan);
spin_unlock_irqrestore(&atchan->lock, flags);
if (!desc) {
dev_err(chan2dev(chan), "can't get descriptor\n");
return NULL;
}
chan_cc |= AT_XDMAC_CC_DWIDTH(dwidth);
ublen = chunk->size >> dwidth;
desc->lld.mbr_sa = src;
desc->lld.mbr_da = dst;
desc->lld.mbr_sus = dmaengine_get_src_icg(xt, chunk);
desc->lld.mbr_dus = dmaengine_get_dst_icg(xt, chunk);
desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV3
| AT_XDMAC_MBR_UBC_NDEN
| AT_XDMAC_MBR_UBC_NSEN
| ublen;
desc->lld.mbr_cfg = chan_cc;
dev_dbg(chan2dev(chan),
"%s: lld: mbr_sa=0x%08x, mbr_da=0x%08x, mbr_ubc=0x%08x, mbr_cfg=0x%08x\n",
__func__, desc->lld.mbr_sa, desc->lld.mbr_da,
desc->lld.mbr_ubc, desc->lld.mbr_cfg);
/* Chain lld. */
if (prev)
at_xdmac_queue_desc(chan, prev, desc);
return desc;
}
static struct dma_async_tx_descriptor *
at_xdmac_prep_interleaved(struct dma_chan *chan,
struct dma_interleaved_template *xt,
unsigned long flags)
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac_desc *prev = NULL, *first = NULL;
struct data_chunk *chunk, *prev_chunk = NULL;
dma_addr_t dst_addr, src_addr;
size_t dst_skip, src_skip, len = 0;
size_t prev_dst_icg = 0, prev_src_icg = 0;
int i;
if (!xt || (xt->numf != 1) || (xt->dir != DMA_MEM_TO_MEM))
return NULL;
dev_dbg(chan2dev(chan), "%s: src=0x%08x, dest=0x%08x, numf=%d, frame_size=%d, flags=0x%lx\n",
__func__, xt->src_start, xt->dst_start, xt->numf,
xt->frame_size, flags);
src_addr = xt->src_start;
dst_addr = xt->dst_start;
for (i = 0; i < xt->frame_size; i++) {
struct at_xdmac_desc *desc;
size_t src_icg, dst_icg;
chunk = xt->sgl + i;
dst_icg = dmaengine_get_dst_icg(xt, chunk);
src_icg = dmaengine_get_src_icg(xt, chunk);
src_skip = chunk->size + src_icg;
dst_skip = chunk->size + dst_icg;
dev_dbg(chan2dev(chan),
"%s: chunk size=%d, src icg=%d, dst icg=%d\n",
__func__, chunk->size, src_icg, dst_icg);
/*
* Handle the case where we just have the same
* transfer to setup, we can just increase the
* block number and reuse the same descriptor.
*/
if (prev_chunk && prev &&
(prev_chunk->size == chunk->size) &&
(prev_src_icg == src_icg) &&
(prev_dst_icg == dst_icg)) {
dev_dbg(chan2dev(chan),
"%s: same configuration that the previous chunk, merging the descriptors...\n",
__func__);
at_xdmac_increment_block_count(chan, prev);
continue;
}
desc = at_xdmac_interleaved_queue_desc(chan, atchan,
prev,
src_addr, dst_addr,
xt, chunk);
if (!desc) {
list_splice_init(&first->descs_list,
&atchan->free_descs_list);
return NULL;
}
if (!first)
first = desc;
dev_dbg(chan2dev(chan), "%s: add desc 0x%p to descs_list 0x%p\n",
__func__, desc, first);
list_add_tail(&desc->desc_node, &first->descs_list);
if (xt->src_sgl)
src_addr += src_skip;
if (xt->dst_sgl)
dst_addr += dst_skip;
len += chunk->size;
prev_chunk = chunk;
prev_dst_icg = dst_icg;
prev_src_icg = src_icg;
prev = desc;
}
first->tx_dma_desc.cookie = -EBUSY;
first->tx_dma_desc.flags = flags;
first->xfer_size = len;
return &first->tx_dma_desc;
}
static struct dma_async_tx_descriptor *
at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
size_t len, unsigned long flags)
......@@ -814,24 +1046,7 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
if (unlikely(!len))
return NULL;
/*
* Check address alignment to select the greater data width we can use.
* Some XDMAC implementations don't provide dword transfer, in this
* case selecting dword has the same behavior as selecting word transfers.
*/
if (!((src_addr | dst_addr) & 7)) {
dwidth = AT_XDMAC_CC_DWIDTH_DWORD;
dev_dbg(chan2dev(chan), "%s: dwidth: double word\n", __func__);
} else if (!((src_addr | dst_addr) & 3)) {
dwidth = AT_XDMAC_CC_DWIDTH_WORD;
dev_dbg(chan2dev(chan), "%s: dwidth: word\n", __func__);
} else if (!((src_addr | dst_addr) & 1)) {
dwidth = AT_XDMAC_CC_DWIDTH_HALFWORD;
dev_dbg(chan2dev(chan), "%s: dwidth: half word\n", __func__);
} else {
dwidth = AT_XDMAC_CC_DWIDTH_BYTE;
dev_dbg(chan2dev(chan), "%s: dwidth: byte\n", __func__);
}
dwidth = at_xdmac_align_width(chan, src_addr | dst_addr);
/* Prepare descriptors. */
while (remaining_size) {
......@@ -861,19 +1076,8 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
dev_dbg(chan2dev(chan), "%s: xfer_size=%zu\n", __func__, xfer_size);
/* Check remaining length and change data width if needed. */
if (!((src_addr | dst_addr | xfer_size) & 7)) {
dwidth = AT_XDMAC_CC_DWIDTH_DWORD;
dev_dbg(chan2dev(chan), "%s: dwidth: double word\n", __func__);
} else if (!((src_addr | dst_addr | xfer_size) & 3)) {
dwidth = AT_XDMAC_CC_DWIDTH_WORD;
dev_dbg(chan2dev(chan), "%s: dwidth: word\n", __func__);
} else if (!((src_addr | dst_addr | xfer_size) & 1)) {
dwidth = AT_XDMAC_CC_DWIDTH_HALFWORD;
dev_dbg(chan2dev(chan), "%s: dwidth: half word\n", __func__);
} else if ((src_addr | dst_addr | xfer_size) & 1) {
dwidth = AT_XDMAC_CC_DWIDTH_BYTE;
dev_dbg(chan2dev(chan), "%s: dwidth: byte\n", __func__);
}
dwidth = at_xdmac_align_width(chan,
src_addr | dst_addr | xfer_size);
chan_cc |= AT_XDMAC_CC_DWIDTH(dwidth);
ublen = xfer_size >> dwidth;
......@@ -884,7 +1088,6 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV2
| AT_XDMAC_MBR_UBC_NDEN
| AT_XDMAC_MBR_UBC_NSEN
| (remaining_size ? AT_XDMAC_MBR_UBC_NDE : 0)
| ublen;
desc->lld.mbr_cfg = chan_cc;
......@@ -893,12 +1096,8 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
__func__, &desc->lld.mbr_sa, &desc->lld.mbr_da, desc->lld.mbr_ubc, desc->lld.mbr_cfg);
/* Chain lld. */
if (prev) {
prev->lld.mbr_nda = desc->tx_dma_desc.phys;
dev_dbg(chan2dev(chan),
"%s: chain lld: prev=0x%p, mbr_nda=0x%08x\n",
__func__, prev, prev->lld.mbr_nda);
}
if (prev)
at_xdmac_queue_desc(chan, prev, desc);
prev = desc;
if (!first)
......@@ -915,6 +1114,93 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
return &first->tx_dma_desc;
}
static struct at_xdmac_desc *at_xdmac_memset_create_desc(struct dma_chan *chan,
struct at_xdmac_chan *atchan,
dma_addr_t dst_addr,
size_t len,
int value)
{
struct at_xdmac_desc *desc;
unsigned long flags;
size_t ublen;
u32 dwidth;
/*
* WARNING: The channel configuration is set here since there is no
* dmaengine_slave_config call in this case. Moreover we don't know the
* direction, it involves we can't dynamically set the source and dest
* interface so we have to use the same one. Only interface 0 allows EBI
* access. Hopefully we can access DDR through both ports (at least on
* SAMA5D4x), so we can use the same interface for source and dest,
* that solves the fact we don't know the direction.
*/
u32 chan_cc = AT_XDMAC_CC_DAM_INCREMENTED_AM
| AT_XDMAC_CC_SAM_INCREMENTED_AM
| AT_XDMAC_CC_DIF(0)
| AT_XDMAC_CC_SIF(0)
| AT_XDMAC_CC_MBSIZE_SIXTEEN
| AT_XDMAC_CC_MEMSET_HW_MODE
| AT_XDMAC_CC_TYPE_MEM_TRAN;
dwidth = at_xdmac_align_width(chan, dst_addr);
if (len >= (AT_XDMAC_MBR_UBC_UBLEN_MAX << dwidth)) {
dev_err(chan2dev(chan),
"%s: Transfer too large, aborting...\n",
__func__);
return NULL;
}
spin_lock_irqsave(&atchan->lock, flags);
desc = at_xdmac_get_desc(atchan);
spin_unlock_irqrestore(&atchan->lock, flags);
if (!desc) {
dev_err(chan2dev(chan), "can't get descriptor\n");
return NULL;
}
chan_cc |= AT_XDMAC_CC_DWIDTH(dwidth);
ublen = len >> dwidth;
desc->lld.mbr_da = dst_addr;
desc->lld.mbr_ds = value;
desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV3
| AT_XDMAC_MBR_UBC_NDEN
| AT_XDMAC_MBR_UBC_NSEN
| ublen;
desc->lld.mbr_cfg = chan_cc;
dev_dbg(chan2dev(chan),
"%s: lld: mbr_da=0x%08x, mbr_ds=0x%08x, mbr_ubc=0x%08x, mbr_cfg=0x%08x\n",
__func__, desc->lld.mbr_da, desc->lld.mbr_ds, desc->lld.mbr_ubc,
desc->lld.mbr_cfg);
return desc;
}
struct dma_async_tx_descriptor *
at_xdmac_prep_dma_memset(struct dma_chan *chan, dma_addr_t dest, int value,
size_t len, unsigned long flags)
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac_desc *desc;
dev_dbg(chan2dev(chan), "%s: dest=0x%08x, len=%d, pattern=0x%x, flags=0x%lx\n",
__func__, dest, len, value, flags);
if (unlikely(!len))
return NULL;
desc = at_xdmac_memset_create_desc(chan, atchan, dest, len, value);
list_add_tail(&desc->desc_node, &desc->descs_list);
desc->tx_dma_desc.cookie = -EBUSY;
desc->tx_dma_desc.flags = flags;
desc->xfer_size = len;
return &desc->tx_dma_desc;
}
static enum dma_status
at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
struct dma_tx_state *txstate)
......@@ -1445,7 +1731,9 @@ static int at_xdmac_probe(struct platform_device *pdev)
}
dma_cap_set(DMA_CYCLIC, atxdmac->dma.cap_mask);
dma_cap_set(DMA_INTERLEAVE, atxdmac->dma.cap_mask);
dma_cap_set(DMA_MEMCPY, atxdmac->dma.cap_mask);
dma_cap_set(DMA_MEMSET, atxdmac->dma.cap_mask);
dma_cap_set(DMA_SLAVE, atxdmac->dma.cap_mask);
/*
* Without DMA_PRIVATE the driver is not able to allocate more than
......@@ -1458,7 +1746,9 @@ static int at_xdmac_probe(struct platform_device *pdev)
atxdmac->dma.device_tx_status = at_xdmac_tx_status;
atxdmac->dma.device_issue_pending = at_xdmac_issue_pending;
atxdmac->dma.device_prep_dma_cyclic = at_xdmac_prep_dma_cyclic;
atxdmac->dma.device_prep_interleaved_dma = at_xdmac_prep_interleaved;
atxdmac->dma.device_prep_dma_memcpy = at_xdmac_prep_dma_memcpy;
atxdmac->dma.device_prep_dma_memset = at_xdmac_prep_dma_memset;
atxdmac->dma.device_prep_slave_sg = at_xdmac_prep_slave_sg;
atxdmac->dma.device_config = at_xdmac_device_config;
atxdmac->dma.device_pause = at_xdmac_device_pause;
......
......@@ -267,6 +267,13 @@ static void dma_chan_put(struct dma_chan *chan)
/* This channel is not in use anymore, free it */
if (!chan->client_count && chan->device->device_free_chan_resources)
chan->device->device_free_chan_resources(chan);
/* If the channel is used via a DMA request router, free the mapping */
if (chan->router && chan->router->route_free) {
chan->router->route_free(chan->router->dev, chan->route_data);
chan->router = NULL;
chan->route_data = NULL;
}
}
enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie)
......@@ -536,7 +543,7 @@ static struct dma_chan *private_candidate(const dma_cap_mask_t *mask,
}
/**
* dma_request_slave_channel - try to get specific channel exclusively
* dma_get_slave_channel - try to get specific channel exclusively
* @chan: target channel
*/
struct dma_chan *dma_get_slave_channel(struct dma_chan *chan)
......@@ -648,7 +655,7 @@ struct dma_chan *__dma_request_channel(const dma_cap_mask_t *mask,
EXPORT_SYMBOL_GPL(__dma_request_channel);
/**
* dma_request_slave_channel - try to allocate an exclusive slave channel
* dma_request_slave_channel_reason - try to allocate an exclusive slave channel
* @dev: pointer to client device structure
* @name: slave channel name
*
......@@ -836,6 +843,8 @@ int dma_async_device_register(struct dma_device *device)
!device->device_prep_dma_pq);
BUG_ON(dma_has_cap(DMA_PQ_VAL, device->cap_mask) &&
!device->device_prep_dma_pq_val);
BUG_ON(dma_has_cap(DMA_MEMSET, device->cap_mask) &&
!device->device_prep_dma_memset);
BUG_ON(dma_has_cap(DMA_INTERRUPT, device->cap_mask) &&
!device->device_prep_dma_interrupt);
BUG_ON(dma_has_cap(DMA_SG, device->cap_mask) &&
......
......@@ -1364,7 +1364,7 @@ static int __init ep93xx_dma_probe(struct platform_device *pdev)
return ret;
}
static struct platform_device_id ep93xx_dma_driver_ids[] = {
static const struct platform_device_id ep93xx_dma_driver_ids[] = {
{ "ep93xx-dma-m2p", 0 },
{ "ep93xx-dma-m2m", 1 },
{ },
......
......@@ -881,10 +881,6 @@ static int fsl_edma_probe(struct platform_device *pdev)
}
ret = fsl_edma_irq_init(pdev, fsl_edma);
if (ret)
return ret;
fsl_edma->big_endian = of_property_read_bool(np, "big-endian");
INIT_LIST_HEAD(&fsl_edma->dma_dev.channels);
......@@ -900,6 +896,11 @@ static int fsl_edma_probe(struct platform_device *pdev)
fsl_edma_chan_mux(fsl_chan, 0, false);
}
edma_writel(fsl_edma, ~0, fsl_edma->membase + EDMA_INTR);
ret = fsl_edma_irq_init(pdev, fsl_edma);
if (ret)
return ret;
dma_cap_set(DMA_PRIVATE, fsl_edma->dma_dev.cap_mask);
dma_cap_set(DMA_SLAVE, fsl_edma->dma_dev.cap_mask);
dma_cap_set(DMA_CYCLIC, fsl_edma->dma_dev.cap_mask);
......
......@@ -193,7 +193,7 @@ struct imxdma_filter_data {
int request;
};
static struct platform_device_id imx_dma_devtype[] = {
static const struct platform_device_id imx_dma_devtype[] = {
{
.name = "imx1-dma",
.driver_data = IMX1_DMA,
......
......@@ -420,7 +420,7 @@ static struct sdma_driver_data sdma_imx6q = {
.script_addrs = &sdma_script_imx6q,
};
static struct platform_device_id sdma_devtypes[] = {
static const struct platform_device_id sdma_devtypes[] = {
{
.name = "imx25-sdma",
.driver_data = (unsigned long)&sdma_imx25,
......
This diff is collapsed.
......@@ -19,7 +19,7 @@
#include <linux/dmaengine.h>
#include <linux/interrupt.h>
#define MV_XOR_POOL_SIZE PAGE_SIZE
#define MV_XOR_POOL_SIZE (MV_XOR_SLOT_SIZE * 3072)
#define MV_XOR_SLOT_SIZE 64
#define MV_XOR_THRESHOLD 1
#define MV_XOR_MAX_CHANNELS 2
......@@ -30,7 +30,13 @@
/* Values for the XOR_CONFIG register */
#define XOR_OPERATION_MODE_XOR 0
#define XOR_OPERATION_MODE_MEMCPY 2
#define XOR_OPERATION_MODE_IN_DESC 7
#define XOR_DESCRIPTOR_SWAP BIT(14)
#define XOR_DESC_SUCCESS 0x40000000
#define XOR_DESC_OPERATION_XOR (0 << 24)
#define XOR_DESC_OPERATION_CRC32C (1 << 24)
#define XOR_DESC_OPERATION_MEMCPY (2 << 24)
#define XOR_DESC_DMA_OWNED BIT(31)
#define XOR_DESC_EOD_INT_EN BIT(31)
......@@ -88,13 +94,14 @@ struct mv_xor_device {
* @mmr_base: memory mapped register base
* @idx: the index of the xor channel
* @chain: device chain view of the descriptors
* @free_slots: free slots usable by the channel
* @allocated_slots: slots allocated by the driver
* @completed_slots: slots completed by HW but still need to be acked
* @device: parent device
* @common: common dmaengine channel object members
* @last_used: place holder for allocation to continue from where it left off
* @all_slots: complete domain of slots usable by the channel
* @slots_allocated: records the actual size of the descriptor slot pool
* @irq_tasklet: bottom half where mv_xor_slot_cleanup runs
* @op_in_desc: new mode of driver, each op is written to descriptor.
*/
struct mv_xor_chan {
int pending;
......@@ -105,16 +112,17 @@ struct mv_xor_chan {
int irq;
enum dma_transaction_type current_type;
struct list_head chain;
struct list_head free_slots;
struct list_head allocated_slots;
struct list_head completed_slots;
dma_addr_t dma_desc_pool;
void *dma_desc_pool_virt;
size_t pool_size;
struct dma_device dmadev;
struct dma_chan dmachan;
struct mv_xor_desc_slot *last_used;
struct list_head all_slots;
int slots_allocated;
struct tasklet_struct irq_tasklet;
int op_in_desc;
char dummy_src[MV_XOR_MIN_BYTE_COUNT];
char dummy_dst[MV_XOR_MIN_BYTE_COUNT];
dma_addr_t dummy_src_addr, dummy_dst_addr;
......@@ -122,9 +130,7 @@ struct mv_xor_chan {
/**
* struct mv_xor_desc_slot - software descriptor
* @slot_node: node on the mv_xor_chan.all_slots list
* @chain_node: node on the mv_xor_chan.chain list
* @completed_node: node on the mv_xor_chan.completed_slots list
* @node: node on the mv_xor_chan lists
* @hw_desc: virtual address of the hardware descriptor chain
* @phys: hardware address of the hardware descriptor chain
* @slot_used: slot in use or not
......@@ -133,12 +139,9 @@ struct mv_xor_chan {
* @async_tx: support for the async_tx api
*/
struct mv_xor_desc_slot {
struct list_head slot_node;
struct list_head chain_node;
struct list_head completed_node;
struct list_head node;
enum dma_transaction_type type;
void *hw_desc;
u16 slot_used;
u16 idx;
struct dma_async_tx_descriptor async_tx;
};
......
......@@ -170,7 +170,7 @@ static struct mxs_dma_type mxs_dma_types[] = {
}
};
static struct platform_device_id mxs_dma_ids[] = {
static const struct platform_device_id mxs_dma_ids[] = {
{
.name = "imx23-dma-apbh",
.driver_data = (kernel_ulong_t) &mxs_dma_types[0],
......
......@@ -1455,7 +1455,7 @@ static int nbpf_remove(struct platform_device *pdev)
return 0;
}
static struct platform_device_id nbpf_ids[] = {
static const struct platform_device_id nbpf_ids[] = {
{"nbpfaxi64dmac1b4", (kernel_ulong_t)&nbpf_cfg[NBPF1B4]},
{"nbpfaxi64dmac1b8", (kernel_ulong_t)&nbpf_cfg[NBPF1B8]},
{"nbpfaxi64dmac1b16", (kernel_ulong_t)&nbpf_cfg[NBPF1B16]},
......
......@@ -44,6 +44,50 @@ static struct of_dma *of_dma_find_controller(struct of_phandle_args *dma_spec)
return NULL;
}
/**
* of_dma_router_xlate - translation function for router devices
* @dma_spec: pointer to DMA specifier as found in the device tree
* @of_dma: pointer to DMA controller data (router information)
*
* The function creates new dma_spec to be passed to the router driver's
* of_dma_route_allocate() function to prepare a dma_spec which will be used
* to request channel from the real DMA controller.
*/
static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct dma_chan *chan;
struct of_dma *ofdma_target;
struct of_phandle_args dma_spec_target;
void *route_data;
/* translate the request for the real DMA controller */
memcpy(&dma_spec_target, dma_spec, sizeof(dma_spec_target));
route_data = ofdma->of_dma_route_allocate(&dma_spec_target, ofdma);
if (IS_ERR(route_data))
return NULL;
ofdma_target = of_dma_find_controller(&dma_spec_target);
if (!ofdma_target)
return NULL;
chan = ofdma_target->of_dma_xlate(&dma_spec_target, ofdma_target);
if (chan) {
chan->router = ofdma->dma_router;
chan->route_data = route_data;
} else {
ofdma->dma_router->route_free(ofdma->dma_router->dev,
route_data);
}
/*
* Need to put the node back since the ofdma->of_dma_route_allocate
* has taken it for generating the new, translated dma_spec
*/
of_node_put(dma_spec_target.np);
return chan;
}
/**
* of_dma_controller_register - Register a DMA controller to DT DMA helpers
* @np: device node of DMA controller
......@@ -109,6 +153,51 @@ void of_dma_controller_free(struct device_node *np)
}
EXPORT_SYMBOL_GPL(of_dma_controller_free);
/**
* of_dma_router_register - Register a DMA router to DT DMA helpers as a
* controller
* @np: device node of DMA router
* @of_dma_route_allocate: setup function for the router which need to
* modify the dma_spec for the DMA controller to
* use and to set up the requested route.
* @dma_router: pointer to dma_router structure to be used when
* the route need to be free up.
*
* Returns 0 on success or appropriate errno value on error.
*
* Allocated memory should be freed with appropriate of_dma_controller_free()
* call.
*/
int of_dma_router_register(struct device_node *np,
void *(*of_dma_route_allocate)
(struct of_phandle_args *, struct of_dma *),
struct dma_router *dma_router)
{
struct of_dma *ofdma;
if (!np || !of_dma_route_allocate || !dma_router) {
pr_err("%s: not enough information provided\n", __func__);
return -EINVAL;
}
ofdma = kzalloc(sizeof(*ofdma), GFP_KERNEL);
if (!ofdma)
return -ENOMEM;
ofdma->of_node = np;
ofdma->of_dma_xlate = of_dma_router_xlate;
ofdma->of_dma_route_allocate = of_dma_route_allocate;
ofdma->dma_router = dma_router;
/* Now queue of_dma controller structure in list */
mutex_lock(&of_dma_lock);
list_add_tail(&ofdma->of_dma_controllers, &of_dma_list);
mutex_unlock(&of_dma_lock);
return 0;
}
EXPORT_SYMBOL_GPL(of_dma_router_register);
/**
* of_dma_match_channel - Check if a DMA specifier matches name
* @np: device node to look for DMA channels
......
......@@ -22,6 +22,9 @@
#include "virt-dma.h"
#define OMAP_SDMA_REQUESTS 127
#define OMAP_SDMA_CHANNELS 32
struct omap_dmadev {
struct dma_device ddev;
spinlock_t lock;
......@@ -31,9 +34,10 @@ struct omap_dmadev {
const struct omap_dma_reg *reg_map;
struct omap_system_dma_plat_info *plat;
bool legacy;
unsigned dma_requests;
spinlock_t irq_lock;
uint32_t irq_enable_mask;
struct omap_chan *lch_map[32];
struct omap_chan *lch_map[OMAP_SDMA_CHANNELS];
};
struct omap_chan {
......@@ -362,7 +366,7 @@ static void omap_dma_start_sg(struct omap_chan *c, struct omap_desc *d,
struct omap_sg *sg = d->sg + idx;
unsigned cxsa, cxei, cxfi;
if (d->dir == DMA_DEV_TO_MEM) {
if (d->dir == DMA_DEV_TO_MEM || d->dir == DMA_MEM_TO_MEM) {
cxsa = CDSA;
cxei = CDEI;
cxfi = CDFI;
......@@ -408,7 +412,7 @@ static void omap_dma_start_desc(struct omap_chan *c)
if (dma_omap1())
omap_dma_chan_write(c, CCR2, d->ccr >> 16);
if (d->dir == DMA_DEV_TO_MEM) {
if (d->dir == DMA_DEV_TO_MEM || d->dir == DMA_MEM_TO_MEM) {
cxsa = CSSA;
cxei = CSEI;
cxfi = CSFI;
......@@ -589,6 +593,7 @@ static void omap_dma_free_chan_resources(struct dma_chan *chan)
omap_free_dma(c->dma_ch);
dev_dbg(od->ddev.dev, "freeing channel for %u\n", c->dma_sig);
c->dma_sig = 0;
}
static size_t omap_dma_sg_size(struct omap_sg *sg)
......@@ -948,6 +953,51 @@ static struct dma_async_tx_descriptor *omap_dma_prep_dma_cyclic(
return vchan_tx_prep(&c->vc, &d->vd, flags);
}
static struct dma_async_tx_descriptor *omap_dma_prep_dma_memcpy(
struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
size_t len, unsigned long tx_flags)
{
struct omap_chan *c = to_omap_dma_chan(chan);
struct omap_desc *d;
uint8_t data_type;
d = kzalloc(sizeof(*d) + sizeof(d->sg[0]), GFP_ATOMIC);
if (!d)
return NULL;
data_type = __ffs((src | dest | len));
if (data_type > CSDP_DATA_TYPE_32)
data_type = CSDP_DATA_TYPE_32;
d->dir = DMA_MEM_TO_MEM;
d->dev_addr = src;
d->fi = 0;
d->es = data_type;
d->sg[0].en = len / BIT(data_type);
d->sg[0].fn = 1;
d->sg[0].addr = dest;
d->sglen = 1;
d->ccr = c->ccr;
d->ccr |= CCR_DST_AMODE_POSTINC | CCR_SRC_AMODE_POSTINC;
d->cicr = CICR_DROP_IE;
if (tx_flags & DMA_PREP_INTERRUPT)
d->cicr |= CICR_FRAME_IE;
d->csdp = data_type;
if (dma_omap1()) {
d->cicr |= CICR_TOUT_IE;
d->csdp |= CSDP_DST_PORT_EMIFF | CSDP_SRC_PORT_EMIFF;
} else {
d->csdp |= CSDP_DST_PACKED | CSDP_SRC_PACKED;
d->cicr |= CICR_MISALIGNED_ERR_IE | CICR_TRANS_ERR_IE;
d->csdp |= CSDP_DST_BURST_64 | CSDP_SRC_BURST_64;
}
return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
}
static int omap_dma_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg)
{
struct omap_chan *c = to_omap_dma_chan(chan);
......@@ -1037,7 +1087,7 @@ static int omap_dma_resume(struct dma_chan *chan)
return 0;
}
static int omap_dma_chan_init(struct omap_dmadev *od, int dma_sig)
static int omap_dma_chan_init(struct omap_dmadev *od)
{
struct omap_chan *c;
......@@ -1046,7 +1096,6 @@ static int omap_dma_chan_init(struct omap_dmadev *od, int dma_sig)
return -ENOMEM;
c->reg_map = od->reg_map;
c->dma_sig = dma_sig;
c->vc.desc_free = omap_dma_desc_free;
vchan_init(&c->vc, &od->ddev);
INIT_LIST_HEAD(&c->node);
......@@ -1094,12 +1143,14 @@ static int omap_dma_probe(struct platform_device *pdev)
dma_cap_set(DMA_SLAVE, od->ddev.cap_mask);
dma_cap_set(DMA_CYCLIC, od->ddev.cap_mask);
dma_cap_set(DMA_MEMCPY, od->ddev.cap_mask);
od->ddev.device_alloc_chan_resources = omap_dma_alloc_chan_resources;
od->ddev.device_free_chan_resources = omap_dma_free_chan_resources;
od->ddev.device_tx_status = omap_dma_tx_status;
od->ddev.device_issue_pending = omap_dma_issue_pending;
od->ddev.device_prep_slave_sg = omap_dma_prep_slave_sg;
od->ddev.device_prep_dma_cyclic = omap_dma_prep_dma_cyclic;
od->ddev.device_prep_dma_memcpy = omap_dma_prep_dma_memcpy;
od->ddev.device_config = omap_dma_slave_config;
od->ddev.device_pause = omap_dma_pause;
od->ddev.device_resume = omap_dma_resume;
......@@ -1116,8 +1167,17 @@ static int omap_dma_probe(struct platform_device *pdev)
tasklet_init(&od->task, omap_dma_sched, (unsigned long)od);
for (i = 0; i < 127; i++) {
rc = omap_dma_chan_init(od, i);
od->dma_requests = OMAP_SDMA_REQUESTS;
if (pdev->dev.of_node && of_property_read_u32(pdev->dev.of_node,
"dma-requests",
&od->dma_requests)) {
dev_info(&pdev->dev,
"Missing dma-requests property, using %u.\n",
OMAP_SDMA_REQUESTS);
}
for (i = 0; i < OMAP_SDMA_CHANNELS; i++) {
rc = omap_dma_chan_init(od);
if (rc) {
omap_dma_free(od);
return rc;
......@@ -1208,10 +1268,14 @@ static struct platform_driver omap_dma_driver = {
bool omap_dma_filter_fn(struct dma_chan *chan, void *param)
{
if (chan->device->dev->driver == &omap_dma_driver.driver) {
struct omap_dmadev *od = to_omap_dma_dev(chan->device);
struct omap_chan *c = to_omap_dma_chan(chan);
unsigned req = *(unsigned *)param;
return req == c->dma_sig;
if (req <= od->dma_requests) {
c->dma_sig = req;
return true;
}
}
return false;
}
......
......@@ -1424,8 +1424,8 @@ static int pl330_submit_req(struct pl330_thread *thrd,
goto xfer_exit;
if (ret > pl330->mcbufsz / 2) {
dev_info(pl330->ddma.dev, "%s:%d Trying increasing mcbufsz\n",
__func__, __LINE__);
dev_info(pl330->ddma.dev, "%s:%d Try increasing mcbufsz (%i/%i)\n",
__func__, __LINE__, ret, pl330->mcbufsz / 2);
ret = -ENOMEM;
goto xfer_exit;
}
......@@ -2584,12 +2584,14 @@ pl330_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
{
struct dma_pl330_desc *desc;
struct dma_pl330_chan *pch = to_pchan(chan);
struct pl330_dmac *pl330 = pch->dmac;
struct pl330_dmac *pl330;
int burst;
if (unlikely(!pch || !len))
return NULL;
pl330 = pch->dmac;
desc = __pl330_prep_dma_memcpy(pch, dst, src, len);
if (!desc)
return NULL;
......
This diff is collapsed.
......@@ -1168,7 +1168,7 @@ static struct soc_data soc_s3c2443 = {
.has_clocks = true,
};
static struct platform_device_id s3c24xx_dma_driver_ids[] = {
static const struct platform_device_id s3c24xx_dma_driver_ids[] = {
{
.name = "s3c2410-dma",
.driver_data = (kernel_ulong_t)&soc_s3c2410,
......
......@@ -183,7 +183,7 @@ struct rcar_dmac {
unsigned int n_channels;
struct rcar_dmac_chan *channels;
unsigned long modules[256 / BITS_PER_LONG];
DECLARE_BITMAP(modules, 256);
};
#define to_rcar_dmac(d) container_of(d, struct rcar_dmac, engine)
......
......@@ -11,7 +11,7 @@
#include "shdma-arm.h"
const unsigned int dma_ts_shift[] = SH_DMAE_TS_SHIFT;
static const unsigned int dma_ts_shift[] = SH_DMAE_TS_SHIFT;
static const struct sh_dmae_slave_config dma_slaves[] = {
{
......
This diff is collapsed.
......@@ -891,9 +891,21 @@ static struct sun6i_dma_config sun8i_a23_dma_cfg = {
.nr_max_vchans = 37,
};
/*
* The H3 has 12 physical channels, a maximum DRQ port id of 27,
* and a total of 34 usable source and destination endpoints.
*/
static struct sun6i_dma_config sun8i_h3_dma_cfg = {
.nr_max_channels = 12,
.nr_max_requests = 27,
.nr_max_vchans = 34,
};
static const struct of_device_id sun6i_dma_match[] = {
{ .compatible = "allwinner,sun6i-a31-dma", .data = &sun6i_a31_dma_cfg },
{ .compatible = "allwinner,sun8i-a23-dma", .data = &sun8i_a23_dma_cfg },
{ .compatible = "allwinner,sun8i-h3-dma", .data = &sun8i_h3_dma_cfg },
{ /* sentinel */ }
};
......
/*
* Copyright (C) 2015 Texas Instruments Incorporated - http://www.ti.com
* Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#include <linux/slab.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/io.h>
#include <linux/idr.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_dma.h>
#define TI_XBAR_OUTPUTS 127
#define TI_XBAR_INPUTS 256
static DEFINE_IDR(map_idr);
struct ti_dma_xbar_data {
void __iomem *iomem;
struct dma_router dmarouter;
u16 safe_val; /* Value to reset the crossbar lines */
u32 xbar_requests; /* number of DMA requests connected to XBAR */
u32 dma_requests; /* number of DMA requests forwarded to DMA */
};
struct ti_dma_xbar_map {
u16 xbar_in;
int xbar_out;
};
static inline void ti_dma_xbar_write(void __iomem *iomem, int xbar, u16 val)
{
writew_relaxed(val, iomem + (xbar * 2));
}
static void ti_dma_xbar_free(struct device *dev, void *route_data)
{
struct ti_dma_xbar_data *xbar = dev_get_drvdata(dev);
struct ti_dma_xbar_map *map = route_data;
dev_dbg(dev, "Unmapping XBAR%u (was routed to %d)\n",
map->xbar_in, map->xbar_out);
ti_dma_xbar_write(xbar->iomem, map->xbar_out, xbar->safe_val);
idr_remove(&map_idr, map->xbar_out);
kfree(map);
}
static void *ti_dma_xbar_route_allocate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct platform_device *pdev = of_find_device_by_node(ofdma->of_node);
struct ti_dma_xbar_data *xbar = platform_get_drvdata(pdev);
struct ti_dma_xbar_map *map;
if (dma_spec->args[0] >= xbar->xbar_requests) {
dev_err(&pdev->dev, "Invalid XBAR request number: %d\n",
dma_spec->args[0]);
return ERR_PTR(-EINVAL);
}
/* The of_node_put() will be done in the core for the node */
dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0);
if (!dma_spec->np) {
dev_err(&pdev->dev, "Can't get DMA master\n");
return ERR_PTR(-EINVAL);
}
map = kzalloc(sizeof(*map), GFP_KERNEL);
if (!map) {
of_node_put(dma_spec->np);
return ERR_PTR(-ENOMEM);
}
map->xbar_out = idr_alloc(&map_idr, NULL, 0, xbar->dma_requests,
GFP_KERNEL);
map->xbar_in = (u16)dma_spec->args[0];
/* The DMA request is 1 based in sDMA */
dma_spec->args[0] = map->xbar_out + 1;
dev_dbg(&pdev->dev, "Mapping XBAR%u to DMA%d\n",
map->xbar_in, map->xbar_out);
ti_dma_xbar_write(xbar->iomem, map->xbar_out, map->xbar_in);
return map;
}
static int ti_dma_xbar_probe(struct platform_device *pdev)
{
struct device_node *node = pdev->dev.of_node;
struct device_node *dma_node;
struct ti_dma_xbar_data *xbar;
struct resource *res;
u32 safe_val;
void __iomem *iomem;
int i, ret;
if (!node)
return -ENODEV;
xbar = devm_kzalloc(&pdev->dev, sizeof(*xbar), GFP_KERNEL);
if (!xbar)
return -ENOMEM;
dma_node = of_parse_phandle(node, "dma-masters", 0);
if (!dma_node) {
dev_err(&pdev->dev, "Can't get DMA master node\n");
return -ENODEV;
}
if (of_property_read_u32(dma_node, "dma-requests",
&xbar->dma_requests)) {
dev_info(&pdev->dev,
"Missing XBAR output information, using %u.\n",
TI_XBAR_OUTPUTS);
xbar->dma_requests = TI_XBAR_OUTPUTS;
}
of_node_put(dma_node);
if (of_property_read_u32(node, "dma-requests", &xbar->xbar_requests)) {
dev_info(&pdev->dev,
"Missing XBAR input information, using %u.\n",
TI_XBAR_INPUTS);
xbar->xbar_requests = TI_XBAR_INPUTS;
}
if (!of_property_read_u32(node, "ti,dma-safe-map", &safe_val))
xbar->safe_val = (u16)safe_val;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res)
return -ENODEV;
iomem = devm_ioremap_resource(&pdev->dev, res);
if (!iomem)
return -ENOMEM;
xbar->iomem = iomem;
xbar->dmarouter.dev = &pdev->dev;
xbar->dmarouter.route_free = ti_dma_xbar_free;
platform_set_drvdata(pdev, xbar);
/* Reset the crossbar */
for (i = 0; i < xbar->dma_requests; i++)
ti_dma_xbar_write(xbar->iomem, i, xbar->safe_val);
ret = of_dma_router_register(node, ti_dma_xbar_route_allocate,
&xbar->dmarouter);
if (ret) {
/* Restore the defaults for the crossbar */
for (i = 0; i < xbar->dma_requests; i++)
ti_dma_xbar_write(xbar->iomem, i, i);
}
return ret;
}
static const struct of_device_id ti_dma_xbar_match[] = {
{ .compatible = "ti,dra7-dma-crossbar" },
{},
};
static struct platform_driver ti_dma_xbar_driver = {
.driver = {
.name = "ti-dma-crossbar",
.of_match_table = of_match_ptr(ti_dma_xbar_match),
},
.probe = ti_dma_xbar_probe,
};
int omap_dmaxbar_init(void)
{
return platform_driver_register(&ti_dma_xbar_driver);
}
arch_initcall(omap_dmaxbar_init);
......@@ -29,7 +29,7 @@ dma_cookie_t vchan_tx_submit(struct dma_async_tx_descriptor *tx)
spin_lock_irqsave(&vc->lock, flags);
cookie = dma_cookie_assign(tx);
list_add_tail(&vd->node, &vc->desc_submitted);
list_move_tail(&vd->node, &vc->desc_submitted);
spin_unlock_irqrestore(&vc->lock, flags);
dev_dbg(vc->chan.device->dev, "vchan %p: txd %p[%x]: submitted\n",
......@@ -83,8 +83,10 @@ static void vchan_complete(unsigned long arg)
cb_data = vd->tx.callback_param;
list_del(&vd->node);
vc->desc_free(vd);
if (async_tx_test_ack(&vd->tx))
list_add(&vd->node, &vc->desc_allocated);
else
vc->desc_free(vd);
if (cb)
cb(cb_data);
......@@ -96,9 +98,13 @@ void vchan_dma_desc_free_list(struct virt_dma_chan *vc, struct list_head *head)
while (!list_empty(head)) {
struct virt_dma_desc *vd = list_first_entry(head,
struct virt_dma_desc, node);
list_del(&vd->node);
dev_dbg(vc->chan.device->dev, "txd %p: freeing\n", vd);
vc->desc_free(vd);
if (async_tx_test_ack(&vd->tx)) {
list_move_tail(&vd->node, &vc->desc_allocated);
} else {
dev_dbg(vc->chan.device->dev, "txd %p: freeing\n", vd);
list_del(&vd->node);
vc->desc_free(vd);
}
}
}
EXPORT_SYMBOL_GPL(vchan_dma_desc_free_list);
......@@ -108,6 +114,7 @@ void vchan_init(struct virt_dma_chan *vc, struct dma_device *dmadev)
dma_cookie_init(&vc->chan);
spin_lock_init(&vc->lock);
INIT_LIST_HEAD(&vc->desc_allocated);
INIT_LIST_HEAD(&vc->desc_submitted);
INIT_LIST_HEAD(&vc->desc_issued);
INIT_LIST_HEAD(&vc->desc_completed);
......
......@@ -29,6 +29,7 @@ struct virt_dma_chan {
spinlock_t lock;
/* protected by vc.lock */
struct list_head desc_allocated;
struct list_head desc_submitted;
struct list_head desc_issued;
struct list_head desc_completed;
......@@ -55,11 +56,16 @@ static inline struct dma_async_tx_descriptor *vchan_tx_prep(struct virt_dma_chan
struct virt_dma_desc *vd, unsigned long tx_flags)
{
extern dma_cookie_t vchan_tx_submit(struct dma_async_tx_descriptor *);
unsigned long flags;
dma_async_tx_descriptor_init(&vd->tx, &vc->chan);
vd->tx.flags = tx_flags;
vd->tx.tx_submit = vchan_tx_submit;
spin_lock_irqsave(&vc->lock, flags);
list_add_tail(&vd->node, &vc->desc_allocated);
spin_unlock_irqrestore(&vc->lock, flags);
return &vd->tx;
}
......@@ -122,7 +128,8 @@ static inline struct virt_dma_desc *vchan_next_desc(struct virt_dma_chan *vc)
}
/**
* vchan_get_all_descriptors - obtain all submitted and issued descriptors
* vchan_get_all_descriptors - obtain all allocated, submitted and issued
* descriptors
* vc: virtual channel to get descriptors from
* head: list of descriptors found
*
......@@ -134,6 +141,7 @@ static inline struct virt_dma_desc *vchan_next_desc(struct virt_dma_chan *vc)
static inline void vchan_get_all_descriptors(struct virt_dma_chan *vc,
struct list_head *head)
{
list_splice_tail_init(&vc->desc_allocated, head);
list_splice_tail_init(&vc->desc_submitted, head);
list_splice_tail_init(&vc->desc_issued, head);
list_splice_tail_init(&vc->desc_completed, head);
......@@ -141,11 +149,14 @@ static inline void vchan_get_all_descriptors(struct virt_dma_chan *vc,
static inline void vchan_free_chan_resources(struct virt_dma_chan *vc)
{
struct virt_dma_desc *vd;
unsigned long flags;
LIST_HEAD(head);
spin_lock_irqsave(&vc->lock, flags);
vchan_get_all_descriptors(vc, &head);
list_for_each_entry(vd, &head, node)
async_tx_clear_ack(&vd->tx);
spin_unlock_irqrestore(&vc->lock, flags);
vchan_dma_desc_free_list(vc, &head);
......
......@@ -124,32 +124,8 @@
#define XGENE_DMA_DESC_ELERR_POS 46
#define XGENE_DMA_DESC_RTYPE_POS 56
#define XGENE_DMA_DESC_LERR_POS 60
#define XGENE_DMA_DESC_FLYBY_POS 4
#define XGENE_DMA_DESC_BUFLEN_POS 48
#define XGENE_DMA_DESC_HOENQ_NUM_POS 48
#define XGENE_DMA_DESC_NV_SET(m) \
(((u64 *)(m))[0] |= XGENE_DMA_DESC_NV_BIT)
#define XGENE_DMA_DESC_IN_SET(m) \
(((u64 *)(m))[0] |= XGENE_DMA_DESC_IN_BIT)
#define XGENE_DMA_DESC_RTYPE_SET(m, v) \
(((u64 *)(m))[0] |= ((u64)(v) << XGENE_DMA_DESC_RTYPE_POS))
#define XGENE_DMA_DESC_BUFADDR_SET(m, v) \
(((u64 *)(m))[0] |= (v))
#define XGENE_DMA_DESC_BUFLEN_SET(m, v) \
(((u64 *)(m))[0] |= ((u64)(v) << XGENE_DMA_DESC_BUFLEN_POS))
#define XGENE_DMA_DESC_C_SET(m) \
(((u64 *)(m))[1] |= XGENE_DMA_DESC_C_BIT)
#define XGENE_DMA_DESC_FLYBY_SET(m, v) \
(((u64 *)(m))[2] |= ((v) << XGENE_DMA_DESC_FLYBY_POS))
#define XGENE_DMA_DESC_MULTI_SET(m, v, i) \
(((u64 *)(m))[2] |= ((u64)(v) << (((i) + 1) * 8)))
#define XGENE_DMA_DESC_DR_SET(m) \
(((u64 *)(m))[2] |= XGENE_DMA_DESC_DR_BIT)
#define XGENE_DMA_DESC_DST_ADDR_SET(m, v) \
(((u64 *)(m))[3] |= (v))
#define XGENE_DMA_DESC_H0ENQ_NUM_SET(m, v) \
(((u64 *)(m))[3] |= ((u64)(v) << XGENE_DMA_DESC_HOENQ_NUM_POS))
#define XGENE_DMA_DESC_ELERR_RD(m) \
(((m) >> XGENE_DMA_DESC_ELERR_POS) & 0x3)
#define XGENE_DMA_DESC_LERR_RD(m) \
......@@ -158,14 +134,7 @@
(((elerr) << 4) | (lerr))
/* X-Gene DMA descriptor empty s/w signature */
#define XGENE_DMA_DESC_EMPTY_INDEX 0
#define XGENE_DMA_DESC_EMPTY_SIGNATURE ~0ULL
#define XGENE_DMA_DESC_SET_EMPTY(m) \
(((u64 *)(m))[XGENE_DMA_DESC_EMPTY_INDEX] = \
XGENE_DMA_DESC_EMPTY_SIGNATURE)
#define XGENE_DMA_DESC_IS_EMPTY(m) \
(((u64 *)(m))[XGENE_DMA_DESC_EMPTY_INDEX] == \
XGENE_DMA_DESC_EMPTY_SIGNATURE)
/* X-Gene DMA configurable parameters defines */
#define XGENE_DMA_RING_NUM 512
......@@ -184,7 +153,7 @@
#define XGENE_DMA_XOR_ALIGNMENT 6 /* 64 Bytes */
#define XGENE_DMA_MAX_XOR_SRC 5
#define XGENE_DMA_16K_BUFFER_LEN_CODE 0x0
#define XGENE_DMA_INVALID_LEN_CODE 0x7800
#define XGENE_DMA_INVALID_LEN_CODE 0x7800000000000000ULL
/* X-Gene DMA descriptor error codes */
#define ERR_DESC_AXI 0x01
......@@ -214,10 +183,10 @@
#define ERR_DESC_SRC_INT 0xB
/* X-Gene DMA flyby operation code */
#define FLYBY_2SRC_XOR 0x8
#define FLYBY_3SRC_XOR 0x9
#define FLYBY_4SRC_XOR 0xA
#define FLYBY_5SRC_XOR 0xB
#define FLYBY_2SRC_XOR 0x80
#define FLYBY_3SRC_XOR 0x90
#define FLYBY_4SRC_XOR 0xA0
#define FLYBY_5SRC_XOR 0xB0
/* X-Gene DMA SW descriptor flags */
#define XGENE_DMA_FLAG_64B_DESC BIT(0)
......@@ -238,10 +207,10 @@
dev_err(chan->dev, "%s: " fmt, chan->name, ##arg)
struct xgene_dma_desc_hw {
u64 m0;
u64 m1;
u64 m2;
u64 m3;
__le64 m0;
__le64 m1;
__le64 m2;
__le64 m3;
};
enum xgene_dma_ring_cfgsize {
......@@ -388,18 +357,11 @@ static bool is_pq_enabled(struct xgene_dma *pdma)
return !(val & XGENE_DMA_PQ_DISABLE_MASK);
}
static void xgene_dma_cpu_to_le64(u64 *desc, int count)
{
int i;
for (i = 0; i < count; i++)
desc[i] = cpu_to_le64(desc[i]);
}
static u16 xgene_dma_encode_len(u32 len)
static u64 xgene_dma_encode_len(size_t len)
{
return (len < XGENE_DMA_MAX_BYTE_CNT) ?
len : XGENE_DMA_16K_BUFFER_LEN_CODE;
((u64)len << XGENE_DMA_DESC_BUFLEN_POS) :
XGENE_DMA_16K_BUFFER_LEN_CODE;
}
static u8 xgene_dma_encode_xor_flyby(u32 src_cnt)
......@@ -424,34 +386,50 @@ static u32 xgene_dma_ring_desc_cnt(struct xgene_dma_ring *ring)
return XGENE_DMA_RING_DESC_CNT(ring_state);
}
static void xgene_dma_set_src_buffer(void *ext8, size_t *len,
static void xgene_dma_set_src_buffer(__le64 *ext8, size_t *len,
dma_addr_t *paddr)
{
size_t nbytes = (*len < XGENE_DMA_MAX_BYTE_CNT) ?
*len : XGENE_DMA_MAX_BYTE_CNT;
XGENE_DMA_DESC_BUFADDR_SET(ext8, *paddr);
XGENE_DMA_DESC_BUFLEN_SET(ext8, xgene_dma_encode_len(nbytes));
*ext8 |= cpu_to_le64(*paddr);
*ext8 |= cpu_to_le64(xgene_dma_encode_len(nbytes));
*len -= nbytes;
*paddr += nbytes;
}
static void xgene_dma_invalidate_buffer(void *ext8)
static void xgene_dma_invalidate_buffer(__le64 *ext8)
{
XGENE_DMA_DESC_BUFLEN_SET(ext8, XGENE_DMA_INVALID_LEN_CODE);
*ext8 |= cpu_to_le64(XGENE_DMA_INVALID_LEN_CODE);
}
static void *xgene_dma_lookup_ext8(u64 *desc, int idx)
static __le64 *xgene_dma_lookup_ext8(struct xgene_dma_desc_hw *desc, int idx)
{
return (idx % 2) ? (desc + idx - 1) : (desc + idx + 1);
switch (idx) {
case 0:
return &desc->m1;
case 1:
return &desc->m0;
case 2:
return &desc->m3;
case 3:
return &desc->m2;
default:
pr_err("Invalid dma descriptor index\n");
}
return NULL;
}
static void xgene_dma_init_desc(void *desc, u16 dst_ring_num)
static void xgene_dma_init_desc(struct xgene_dma_desc_hw *desc,
u16 dst_ring_num)
{
XGENE_DMA_DESC_C_SET(desc); /* Coherent IO */
XGENE_DMA_DESC_IN_SET(desc);
XGENE_DMA_DESC_H0ENQ_NUM_SET(desc, dst_ring_num);
XGENE_DMA_DESC_RTYPE_SET(desc, XGENE_DMA_RING_OWNER_DMA);
desc->m0 |= cpu_to_le64(XGENE_DMA_DESC_IN_BIT);
desc->m0 |= cpu_to_le64((u64)XGENE_DMA_RING_OWNER_DMA <<
XGENE_DMA_DESC_RTYPE_POS);
desc->m1 |= cpu_to_le64(XGENE_DMA_DESC_C_BIT);
desc->m3 |= cpu_to_le64((u64)dst_ring_num <<
XGENE_DMA_DESC_HOENQ_NUM_POS);
}
static void xgene_dma_prep_cpy_desc(struct xgene_dma_chan *chan,
......@@ -459,7 +437,7 @@ static void xgene_dma_prep_cpy_desc(struct xgene_dma_chan *chan,
dma_addr_t dst, dma_addr_t src,
size_t len)
{
void *desc1, *desc2;
struct xgene_dma_desc_hw *desc1, *desc2;
int i;
/* Get 1st descriptor */
......@@ -467,23 +445,21 @@ static void xgene_dma_prep_cpy_desc(struct xgene_dma_chan *chan,
xgene_dma_init_desc(desc1, chan->tx_ring.dst_ring_num);
/* Set destination address */
XGENE_DMA_DESC_DR_SET(desc1);
XGENE_DMA_DESC_DST_ADDR_SET(desc1, dst);
desc1->m2 |= cpu_to_le64(XGENE_DMA_DESC_DR_BIT);
desc1->m3 |= cpu_to_le64(dst);
/* Set 1st source address */
xgene_dma_set_src_buffer(desc1 + 8, &len, &src);
xgene_dma_set_src_buffer(&desc1->m1, &len, &src);
if (len <= 0) {
desc2 = NULL;
goto skip_additional_src;
}
if (!len)
return;
/*
* We need to split this source buffer,
* and need to use 2nd descriptor
*/
desc2 = &desc_sw->desc2;
XGENE_DMA_DESC_NV_SET(desc1);
desc1->m0 |= cpu_to_le64(XGENE_DMA_DESC_NV_BIT);
/* Set 2nd to 5th source address */
for (i = 0; i < 4 && len; i++)
......@@ -496,12 +472,6 @@ static void xgene_dma_prep_cpy_desc(struct xgene_dma_chan *chan,
/* Updated flag that we have prepared 64B descriptor */
desc_sw->flags |= XGENE_DMA_FLAG_64B_DESC;
skip_additional_src:
/* Hardware stores descriptor in little endian format */
xgene_dma_cpu_to_le64(desc1, 4);
if (desc2)
xgene_dma_cpu_to_le64(desc2, 4);
}
static void xgene_dma_prep_xor_desc(struct xgene_dma_chan *chan,
......@@ -510,7 +480,7 @@ static void xgene_dma_prep_xor_desc(struct xgene_dma_chan *chan,
u32 src_cnt, size_t *nbytes,
const u8 *scf)
{
void *desc1, *desc2;
struct xgene_dma_desc_hw *desc1, *desc2;
size_t len = *nbytes;
int i;
......@@ -521,28 +491,24 @@ static void xgene_dma_prep_xor_desc(struct xgene_dma_chan *chan,
xgene_dma_init_desc(desc1, chan->tx_ring.dst_ring_num);
/* Set destination address */
XGENE_DMA_DESC_DR_SET(desc1);
XGENE_DMA_DESC_DST_ADDR_SET(desc1, *dst);
desc1->m2 |= cpu_to_le64(XGENE_DMA_DESC_DR_BIT);
desc1->m3 |= cpu_to_le64(*dst);
/* We have multiple source addresses, so need to set NV bit*/
XGENE_DMA_DESC_NV_SET(desc1);
desc1->m0 |= cpu_to_le64(XGENE_DMA_DESC_NV_BIT);
/* Set flyby opcode */
XGENE_DMA_DESC_FLYBY_SET(desc1, xgene_dma_encode_xor_flyby(src_cnt));
desc1->m2 |= cpu_to_le64(xgene_dma_encode_xor_flyby(src_cnt));
/* Set 1st to 5th source addresses */
for (i = 0; i < src_cnt; i++) {
len = *nbytes;
xgene_dma_set_src_buffer((i == 0) ? (desc1 + 8) :
xgene_dma_set_src_buffer((i == 0) ? &desc1->m1 :
xgene_dma_lookup_ext8(desc2, i - 1),
&len, &src[i]);
XGENE_DMA_DESC_MULTI_SET(desc1, scf[i], i);
desc1->m2 |= cpu_to_le64((scf[i] << ((i + 1) * 8)));
}
/* Hardware stores descriptor in little endian format */
xgene_dma_cpu_to_le64(desc1, 4);
xgene_dma_cpu_to_le64(desc2, 4);
/* Update meta data */
*nbytes = len;
*dst += XGENE_DMA_MAX_BYTE_CNT;
......@@ -738,7 +704,7 @@ static int xgene_chan_xfer_request(struct xgene_dma_ring *ring,
* xgene_chan_xfer_ld_pending - push any pending transactions to hw
* @chan : X-Gene DMA channel
*
* LOCKING: must hold chan->desc_lock
* LOCKING: must hold chan->lock
*/
static void xgene_chan_xfer_ld_pending(struct xgene_dma_chan *chan)
{
......@@ -808,7 +774,8 @@ static void xgene_dma_cleanup_descriptors(struct xgene_dma_chan *chan)
desc_hw = &ring->desc_hw[ring->head];
/* Check if this descriptor has been completed */
if (unlikely(XGENE_DMA_DESC_IS_EMPTY(desc_hw)))
if (unlikely(le64_to_cpu(desc_hw->m0) ==
XGENE_DMA_DESC_EMPTY_SIGNATURE))
break;
if (++ring->head == ring->slots)
......@@ -842,7 +809,7 @@ static void xgene_dma_cleanup_descriptors(struct xgene_dma_chan *chan)
iowrite32(-1, ring->cmd);
/* Mark this hw descriptor as processed */
XGENE_DMA_DESC_SET_EMPTY(desc_hw);
desc_hw->m0 = cpu_to_le64(XGENE_DMA_DESC_EMPTY_SIGNATURE);
xgene_dma_run_tx_complete_actions(chan, desc_sw);
......@@ -889,7 +856,7 @@ static int xgene_dma_alloc_chan_resources(struct dma_chan *dchan)
* @chan: X-Gene DMA channel
* @list: the list to free
*
* LOCKING: must hold chan->desc_lock
* LOCKING: must hold chan->lock
*/
static void xgene_dma_free_desc_list(struct xgene_dma_chan *chan,
struct list_head *list)
......@@ -900,15 +867,6 @@ static void xgene_dma_free_desc_list(struct xgene_dma_chan *chan,
xgene_dma_clean_descriptor(chan, desc);
}
static void xgene_dma_free_tx_desc_list(struct xgene_dma_chan *chan,
struct list_head *list)
{
struct xgene_dma_desc_sw *desc, *_desc;
list_for_each_entry_safe(desc, _desc, list, node)
xgene_dma_clean_descriptor(chan, desc);
}
static void xgene_dma_free_chan_resources(struct dma_chan *dchan)
{
struct xgene_dma_chan *chan = to_dma_chan(dchan);
......@@ -985,7 +943,7 @@ static struct dma_async_tx_descriptor *xgene_dma_prep_memcpy(
if (!first)
return NULL;
xgene_dma_free_tx_desc_list(chan, &first->tx_list);
xgene_dma_free_desc_list(chan, &first->tx_list);
return NULL;
}
......@@ -1093,7 +1051,7 @@ static struct dma_async_tx_descriptor *xgene_dma_prep_sg(
if (!first)
return NULL;
xgene_dma_free_tx_desc_list(chan, &first->tx_list);
xgene_dma_free_desc_list(chan, &first->tx_list);
return NULL;
}
......@@ -1141,7 +1099,7 @@ static struct dma_async_tx_descriptor *xgene_dma_prep_xor(
if (!first)
return NULL;
xgene_dma_free_tx_desc_list(chan, &first->tx_list);
xgene_dma_free_desc_list(chan, &first->tx_list);
return NULL;
}
......@@ -1218,7 +1176,7 @@ static struct dma_async_tx_descriptor *xgene_dma_prep_pq(
if (!first)
return NULL;
xgene_dma_free_tx_desc_list(chan, &first->tx_list);
xgene_dma_free_desc_list(chan, &first->tx_list);
return NULL;
}
......@@ -1316,7 +1274,6 @@ static void xgene_dma_setup_ring(struct xgene_dma_ring *ring)
{
void *ring_cfg = ring->state;
u64 addr = ring->desc_paddr;
void *desc;
u32 i, val;
ring->slots = ring->size / XGENE_DMA_RING_WQ_DESC_SIZE;
......@@ -1358,8 +1315,10 @@ static void xgene_dma_setup_ring(struct xgene_dma_ring *ring)
/* Set empty signature to DMA Rx ring descriptors */
for (i = 0; i < ring->slots; i++) {
struct xgene_dma_desc_hw *desc;
desc = &ring->desc_hw[i];
XGENE_DMA_DESC_SET_EMPTY(desc);
desc->m0 = cpu_to_le64(XGENE_DMA_DESC_EMPTY_SIGNATURE);
}
/* Enable DMA Rx ring interrupt */
......
#ifndef _PXA_DMA_H_
#define _PXA_DMA_H_
enum pxad_chan_prio {
PXAD_PRIO_HIGHEST = 0,
PXAD_PRIO_NORMAL,
PXAD_PRIO_LOW,
PXAD_PRIO_LOWEST,
};
struct pxad_param {
unsigned int drcmr;
enum pxad_chan_prio prio;
};
struct dma_chan;
#ifdef CONFIG_PXA_DMA
bool pxad_filter_fn(struct dma_chan *chan, void *param);
#else
static inline bool pxad_filter_fn(struct dma_chan *chan, void *param)
{
return false;
}
#endif
#endif /* _PXA_DMA_H_ */
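For illustration, a minimal client-side sketch of how a peripheral driver might request a channel through the filter interface above when it cannot describe the channel in devicetree; the header path, the DRCMR number and the chosen priority are assumptions for this sketch, not values taken from the series:

	#include <linux/dmaengine.h>
	#include <linux/dma/pxa-dma.h>		/* assumed location of the header above */

	static struct dma_chan *example_request_pxa_chan(void)
	{
		dma_cap_mask_t mask;
		struct pxad_param param = {
			.drcmr = 12,			/* placeholder DRCMR request line */
			.prio  = PXAD_PRIO_NORMAL,
		};

		dma_cap_zero(mask);
		dma_cap_set(DMA_SLAVE, mask);

		/* pxad_filter_fn() accepts only channels provided by the pxa driver */
		return dma_request_channel(mask, pxad_filter_fn, &param);
	}

The parameter block lives on the stack because dma_request_channel() invokes the filter synchronously before returning.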
This diff has been collapsed and is not shown.
......@@ -23,6 +23,9 @@ struct of_dma {
struct device_node *of_node;
struct dma_chan *(*of_dma_xlate)
(struct of_phandle_args *, struct of_dma *);
void *(*of_dma_route_allocate)
(struct of_phandle_args *, struct of_dma *);
struct dma_router *dma_router;
void *of_dma_data;
};
......@@ -37,12 +40,20 @@ extern int of_dma_controller_register(struct device_node *np,
(struct of_phandle_args *, struct of_dma *),
void *data);
extern void of_dma_controller_free(struct device_node *np);
extern int of_dma_router_register(struct device_node *np,
void *(*of_dma_route_allocate)
(struct of_phandle_args *, struct of_dma *),
struct dma_router *dma_router);
#define of_dma_router_free of_dma_controller_free
extern struct dma_chan *of_dma_request_slave_channel(struct device_node *np,
const char *name);
extern struct dma_chan *of_dma_simple_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma);
extern struct dma_chan *of_dma_xlate_by_chan_id(struct of_phandle_args *dma_spec,
struct of_dma *ofdma);
#else
static inline int of_dma_controller_register(struct device_node *np,
struct dma_chan *(*of_dma_xlate)
......@@ -56,6 +67,16 @@ static inline void of_dma_controller_free(struct device_node *np)
{
}
static inline int of_dma_router_register(struct device_node *np,
void *(*of_dma_route_allocate)
(struct of_phandle_args *, struct of_dma *),
struct dma_router *dma_router)
{
return -ENODEV;
}
#define of_dma_router_free of_dma_controller_free
static inline struct dma_chan *of_dma_request_slave_channel(struct device_node *np,
const char *name)
{
......
/*
* This is for Renesas R-Car Audio-DMAC-peri-peri.
*
* Copyright (C) 2014 Renesas Electronics Corporation
* Copyright (C) 2014 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
*
* This file is based on the include/linux/sh_dma.h
*
* Header for the new SH dmaengine driver
*
* Copyright (C) 2010 Guennadi Liakhovetski <g.liakhovetski@gmx.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef SH_AUDMAPP_H
#define SH_AUDMAPP_H
#include <linux/dmaengine.h>
struct audmapp_slave_config {
int slave_id;
dma_addr_t src;
dma_addr_t dst;
u32 chcr;
};
struct audmapp_pdata {
struct audmapp_slave_config *slave;
int slave_num;
};
#endif /* SH_AUDMAPP_H */
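As a rough sketch of how board code is expected to fill in this platform data (the slave ID, FIFO addresses, CHCR value and include path below are placeholders, not real hardware values):

	#include <linux/kernel.h>		/* ARRAY_SIZE() */
	#include <linux/dma/sh_audmapp.h>	/* assumed path of the header above */

	static struct audmapp_slave_config audmapp_slaves[] = {
		{
			.slave_id = 0,			/* placeholder slave id */
			.src	  = 0xec700000,		/* placeholder source FIFO */
			.dst	  = 0xec740000,		/* placeholder destination FIFO */
			.chcr	  = 0x10000700,		/* placeholder channel config */
		},
	};

	static struct audmapp_pdata audmapp_platform_data = {
		.slave	   = audmapp_slaves,
		.slave_num = ARRAY_SIZE(audmapp_slaves),
	};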