Commit ce1d3fde authored by Linus Torvalds

Merge branch 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This update brings:

   - the big cleanup by Maxime for device control and slave
     capabilities.  This makes the API much cleaner.

   - new IMG MDC driver by Andrew

   - new Renesas R-Car Gen2 DMA Controller driver by Laurent along with
     a bunch of fixes on rcar drivers

   - odd fixes and updates spread over drivers"

* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (130 commits)
  dmaengine: pl330: add DMA_PAUSE feature
  dmaengine: pl330: improve pl330_tx_status() function
  dmaengine: rcar-dmac: Disable channel 0 when using IOMMU
  dmaengine: rcar-dmac: Work around descriptor mode IOMMU errata
  dmaengine: rcar-dmac: Allocate hardware descriptors with DMAC device
  dmaengine: rcar-dmac: Fix oops due to uninitialized list in error ISR
  dmaengine: rcar-dmac: Fix spinlock issues in interrupt
  dmaengine: edma: fix sparse warnings
  dmaengine: rcar-dmac: Fix uninitialized variable usage
  dmaengine: shdmac: extend PM methods
  dmaengine: shdmac: use SET_RUNTIME_PM_OPS()
  dmaengine: pl330: fix bug that cause start the same descs in cyclic
  dmaengine: at_xdmac: allow multiple dwidths when doing slave transfers
  dmaengine: at_xdmac: simplify channel configuration stuff
  dmaengine: at_xdmac: introduce save_cc field
  dmaengine: at_xdmac: wait for in-progress transaction to complete after pausing a channel
  ioat: fail self-test if wait_for_completion times out
  dmaengine: dw: define DW_DMA_MAX_NR_MASTERS
  dmaengine: dw: amend description of dma_dev field
  dmatest: move src_off, dst_off, len inside loop
  ...
* IMG Multi-threaded DMA Controller (MDC)

Required properties:
- compatible: Must be "img,pistachio-mdc-dma".
- reg: Must contain the base address and length of the MDC registers.
- interrupts: Must contain all the per-channel DMA interrupts.
- clocks: Must contain an entry for each entry in clock-names.
  See ../clock/clock-bindings.txt for details.
- clock-names: Must include the following entries:
  - sys: MDC system interface clock.
- img,cr-periph: Must contain a phandle to the peripheral control syscon
  node which contains the DMA request to channel mapping registers.
- img,max-burst-multiplier: Must be the maximum supported burst size multiplier.
  The maximum burst size is this value multiplied by the hardware-reported bus
  width.
- #dma-cells: Must be 3:
  - The first cell is the peripheral's DMA request line.
  - The second cell is a bitmap specifying to which channels the DMA request
    line may be mapped (i.e. bit N set indicates channel N is usable).
  - The third cell is the thread ID to be used by the channel.

Optional properties:
- dma-channels: Number of supported DMA channels, up to 32. If not specified
  the number reported by the hardware is used.
Example:

mdc: dma-controller@18143000 {
	compatible = "img,pistachio-mdc-dma";
	reg = <0x18143000 0x1000>;
	interrupts = <GIC_SHARED 27 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SHARED 28 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SHARED 29 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SHARED 30 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SHARED 31 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SHARED 32 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SHARED 33 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SHARED 34 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SHARED 35 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SHARED 36 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SHARED 37 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SHARED 38 IRQ_TYPE_LEVEL_HIGH>;
	clocks = <&system_clk>;
	clock-names = "sys";

	img,max-burst-multiplier = <16>;
	img,cr-periph = <&cr_periph>;

	#dma-cells = <3>;
};

spi@18100f00 {
	...
	dmas = <&mdc 9 0xffffffff 0>, <&mdc 10 0xffffffff 0>;
	dma-names = "tx", "rx";
	...
};
......@@ -5,9 +5,6 @@ controller instances named DMAC capable of serving multiple clients. Channels
can be dedicated to specific clients or shared between a large number of
clients.
DMA clients are connected to the DMAC ports referenced by an 8-bit identifier
called MID/RID.
Each DMA client is connected to one dedicated port of the DMAC, identified by
an 8-bit port number called the MID/RID. A DMA controller can thus serve up to
256 clients in total. When the number of hardware channels is lower than the
......
......@@ -38,7 +38,7 @@ Example:
chan_allocation_order = <1>;
chan_priority = <1>;
block_size = <0xfff>;
data_width = <3 3 0 0>;
data_width = <3 3>;
};
DMA clients connected to the Designware DMA controller must use the format
......
......@@ -113,6 +113,31 @@ need to initialize a few fields in there:
* channels: should be initialized as a list using the
  INIT_LIST_HEAD macro for example

* src_addr_widths:
  - should contain a bitmask of the supported source transfer widths

* dst_addr_widths:
  - should contain a bitmask of the supported destination transfer
    widths

* directions:
  - should contain a bitmask of the supported slave directions
    (i.e. excluding mem2mem transfers)

* residue_granularity:
  - Granularity of the transfer residue reported to dma_set_residue.
  - This can be either:
    + Descriptor
      -> Your device doesn't support any kind of residue
         reporting. The framework will only know that a particular
         transaction descriptor is done.
    + Segment
      -> Your device is able to report which chunks have been
         transferred
    + Burst
      -> Your device is able to report which bursts have been
         transferred

* dev: should hold the pointer to the struct device associated
  with your current driver instance.
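
As a concrete illustration, a provider could fill these fields at
probe time roughly as follows (a minimal sketch, not taken from this
series; foo_dma_init_caps() is a made-up name, the dma_device fields
are the ones documented above):

	#include <linux/dmaengine.h>

	/* Sketch: advertise generic slave capabilities at probe time. */
	static void foo_dma_init_caps(struct dma_device *dd, struct device *dev)
	{
		dd->dev = dev;
		INIT_LIST_HEAD(&dd->channels);

		/* 1-, 2- and 4-byte accesses supported on both ends */
		dd->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
				      BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
				      BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
		dd->dst_addr_widths = dd->src_addr_widths;

		/* Slave directions only, mem2mem excluded as noted above */
		dd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);

		/* Residue is reported with burst granularity */
		dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
	}
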
......@@ -274,48 +299,36 @@ supported.
account the current period.
- This function can be called in an interrupt context.
* device_control
  - Used by client drivers to control and configure the channel it
    has a handle on.
  - Called with a command and an argument
    + The command is one of the values listed by the enum
      dma_ctrl_cmd. The valid commands are:
      + DMA_PAUSE
        + Pauses a transfer on the channel
        + This command should operate synchronously on the channel,
          pausing right away the work of the given channel
      + DMA_RESUME
        + Restarts a transfer on the channel
        + This command should operate synchronously on the channel,
          resuming right away the work of the given channel
      + DMA_TERMINATE_ALL
        + Aborts all the pending and ongoing transfers on the
          channel
        + This command should operate synchronously on the channel,
          terminating right away all the transfers on the channel
      + DMA_SLAVE_CONFIG
        + Reconfigures the channel with the passed configuration
        + This command should NOT perform synchronously, or on any
          currently queued transfers, but only on subsequent ones
        + In this case, the function will receive a
          dma_slave_config structure pointer as an argument, that
          will detail which configuration to use.
        + Even though that structure contains a direction field,
          this field is deprecated in favor of the direction
          argument given to the prep_* functions
      + FSLDMA_EXTERNAL_START
        + TODO: Why does that even exist?
    + The argument is an opaque unsigned long. This actually is a
      pointer to a struct dma_slave_config that should be used only
      in the DMA_SLAVE_CONFIG.

* device_slave_caps
  - Called through the framework by client drivers in order to have
    an idea of what are the properties of the channel allocated to
    them.
  - Such properties are the buswidth, available directions, etc.
  - Required for every generic layer doing DMA transfers, such as
    ASoC.

* device_config
  - Reconfigures the channel with the configuration given as an
    argument
  - This command should NOT perform synchronously, or on any
    currently queued transfers, but only on subsequent ones
  - In this case, the function will receive a dma_slave_config
    structure pointer as an argument, that will detail which
    configuration to use.
  - Even though that structure contains a direction field, this
    field is deprecated in favor of the direction argument given to
    the prep_* functions
  - This call is mandatory for slave operations only. This should NOT be
    set or expected to be set for memcpy operations.
    If a driver supports both, it should use this call for slave
    operations only and not for memcpy ones.

* device_pause
  - Pauses a transfer on the channel
  - This command should operate synchronously on the channel,
    pausing right away the work of the given channel

* device_resume
  - Resumes a transfer on the channel
  - This command should operate synchronously on the channel,
    resuming right away the work of the given channel

* device_terminate_all
  - Aborts all the pending and ongoing transfers on the channel
  - This command should operate synchronously on the channel,
    terminating right away all the transfers on the channel
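
From the client side, all of these new hooks are reached through the
existing inline wrappers. A minimal usage sketch (the foo_use_channel()
name and the FIFO address are made up; the dmaengine_* wrappers are the
real client-facing API):

	#include <linux/dmaengine.h>

	/* Sketch: configure, pause/resume and tear down a slave channel. */
	static void foo_use_channel(struct dma_chan *chan, dma_addr_t fifo)
	{
		struct dma_slave_config cfg = {
			.dst_addr	= fifo,
			.dst_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
			.dst_maxburst	= 8,
		};

		dmaengine_slave_config(chan, &cfg);	/* -> device_config */
		dmaengine_pause(chan);			/* -> device_pause */
		dmaengine_resume(chan);			/* -> device_resume */
		dmaengine_terminate_all(chan);		/* -> device_terminate_all */
	}
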
Misc notes (stuff that should be documented, but don't really know
where to put them)
......
......@@ -8503,6 +8503,7 @@ SYNOPSYS DESIGNWARE DMAC DRIVER
M: Viresh Kumar <viresh.linux@gmail.com>
M: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
S: Maintained
F: include/linux/dma/dw.h
F: include/linux/platform_data/dma-dw.h
F: drivers/dma/dw/
......
......@@ -112,7 +112,7 @@
chan_allocation_order = <0>;
chan_priority = <1>;
block_size = <0x7ff>;
data_width = <2 0 0 0>;
data_width = <2>;
clocks = <&ahb_clk>;
clock-names = "hclk";
};
......
......@@ -117,7 +117,7 @@
chan_priority = <1>;
block_size = <0xfff>;
dma-masters = <2>;
data_width = <3 3 0 0>;
data_width = <3 3>;
};
dma@eb000000 {
......@@ -133,7 +133,7 @@
chan_allocation_order = <1>;
chan_priority = <1>;
block_size = <0xfff>;
data_width = <3 3 0 0>;
data_width = <3 3>;
};
fsmc: flash@b0000000 {
......
......@@ -607,7 +607,7 @@ static struct dw_dma_platform_data dw_dmac0_data = {
.nr_channels = 3,
.block_size = 4095U,
.nr_masters = 2,
.data_width = { 2, 2, 0, 0 },
.data_width = { 2, 2 },
};
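
Note that only the array size changes in these data_width hunks, not the
encoding: per the dw_dma_platform_data kernel-doc, each entry gives the
corresponding AHB master's bus width as a power of two (0 = 8 bits,
1 = 16 bits, 2 = 32 bits, 3 = 64 bits), so { 2, 2 } above declares two
32-bit masters and the <3 3> device-tree values two 64-bit ones.
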
static struct resource dw_dmac0_resource[] = {
......
......@@ -606,12 +606,12 @@ static void cryp_dma_done(struct cryp_ctx *ctx)
dev_dbg(ctx->device->dev, "[%s]: ", __func__);
chan = ctx->device->dma.chan_mem2cryp;
dmaengine_device_control(chan, DMA_TERMINATE_ALL, 0);
dmaengine_terminate_all(chan);
dma_unmap_sg(chan->device->dev, ctx->device->dma.sg_src,
ctx->device->dma.sg_src_len, DMA_TO_DEVICE);
chan = ctx->device->dma.chan_cryp2mem;
dmaengine_device_control(chan, DMA_TERMINATE_ALL, 0);
dmaengine_terminate_all(chan);
dma_unmap_sg(chan->device->dev, ctx->device->dma.sg_dst,
ctx->device->dma.sg_dst_len, DMA_FROM_DEVICE);
}
......
......@@ -202,7 +202,7 @@ static void hash_dma_done(struct hash_ctx *ctx)
struct dma_chan *chan;
chan = ctx->device->dma.chan_mem2hash;
dmaengine_device_control(chan, DMA_TERMINATE_ALL, 0);
dmaengine_terminate_all(chan);
dma_unmap_sg(chan->device->dev, ctx->device->dma.sg,
ctx->device->dma.sg_len, DMA_TO_DEVICE);
}
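
Both hunks above are mechanical conversions: the old
dmaengine_device_control(chan, DMA_TERMINATE_ALL, 0) call becomes
dmaengine_terminate_all(chan), which after this series is roughly the
following inline wrapper from include/linux/dmaengine.h (a sketch, not
part of this diff):

	static inline int dmaengine_terminate_all(struct dma_chan *chan)
	{
		if (chan->device->device_terminate_all)
			return chan->device->device_terminate_all(chan);

		return -ENOSYS;
	}
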
......
......@@ -416,6 +416,15 @@ config NBPFAXI_DMA
help
Support for "Type-AXI" NBPF DMA IPs from Renesas
config IMG_MDC_DMA
tristate "IMG MDC support"
depends on MIPS || COMPILE_TEST
depends on MFD_SYSCON
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
help
Enable support for the IMG multi-threaded DMA controller (MDC).
config DMA_ENGINE
bool
......
......@@ -19,7 +19,7 @@ obj-$(CONFIG_AT_HDMAC) += at_hdmac.o
obj-$(CONFIG_AT_XDMAC) += at_xdmac.o
obj-$(CONFIG_MX3_IPU) += ipu/
obj-$(CONFIG_TXX9_DMAC) += txx9dmac.o
obj-$(CONFIG_SH_DMAE_BASE) += sh/
obj-$(CONFIG_RENESAS_DMA) += sh/
obj-$(CONFIG_COH901318) += coh901318.o coh901318_lli.o
obj-$(CONFIG_AMCC_PPC440SPE_ADMA) += ppc4xx/
obj-$(CONFIG_IMX_SDMA) += imx-sdma.o
......@@ -50,3 +50,4 @@ obj-y += xilinx/
obj-$(CONFIG_INTEL_MIC_X100_DMA) += mic_x100_dma.o
obj-$(CONFIG_NBPFAXI_DMA) += nbpfaxi.o
obj-$(CONFIG_DMA_SUN6I) += sun6i-dma.o
obj-$(CONFIG_IMG_MDC_DMA) += img-mdc-dma.o
......@@ -1386,32 +1386,6 @@ static u32 pl08x_get_cctl(struct pl08x_dma_chan *plchan,
return pl08x_cctl(cctl);
}
static int dma_set_runtime_config(struct dma_chan *chan,
struct dma_slave_config *config)
{
struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
struct pl08x_driver_data *pl08x = plchan->host;
if (!plchan->slave)
return -EINVAL;
/* Reject definitely invalid configurations */
if (config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
return -EINVAL;
if (config->device_fc && pl08x->vd->pl080s) {
dev_err(&pl08x->adev->dev,
"%s: PL080S does not support peripheral flow control\n",
__func__);
return -EINVAL;
}
plchan->cfg = *config;
return 0;
}
/*
* Slave transactions callback to the slave device to allow
* synchronization of slave DMA signals with the DMAC enable
......@@ -1693,20 +1667,71 @@ static struct dma_async_tx_descriptor *pl08x_prep_dma_cyclic(
return vchan_tx_prep(&plchan->vc, &txd->vd, flags);
}
static int pl08x_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg)
static int pl08x_config(struct dma_chan *chan,
struct dma_slave_config *config)
{
struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
struct pl08x_driver_data *pl08x = plchan->host;
if (!plchan->slave)
return -EINVAL;
/* Reject definitely invalid configurations */
if (config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
return -EINVAL;
if (config->device_fc && pl08x->vd->pl080s) {
dev_err(&pl08x->adev->dev,
"%s: PL080S does not support peripheral flow control\n",
__func__);
return -EINVAL;
}
plchan->cfg = *config;
return 0;
}
static int pl08x_terminate_all(struct dma_chan *chan)
{
struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
struct pl08x_driver_data *pl08x = plchan->host;
unsigned long flags;
int ret = 0;
/* Controls applicable to inactive channels */
if (cmd == DMA_SLAVE_CONFIG) {
return dma_set_runtime_config(chan,
(struct dma_slave_config *)arg);
spin_lock_irqsave(&plchan->vc.lock, flags);
if (!plchan->phychan && !plchan->at) {
spin_unlock_irqrestore(&plchan->vc.lock, flags);
return 0;
}
plchan->state = PL08X_CHAN_IDLE;
if (plchan->phychan) {
/*
* Mark physical channel as free and free any slave
* signal
*/
pl08x_phy_free(plchan);
}
/* Dequeue jobs and free LLIs */
if (plchan->at) {
pl08x_desc_free(&plchan->at->vd);
plchan->at = NULL;
}
/* Dequeue jobs not yet fired as well */
pl08x_free_txd_list(pl08x, plchan);
spin_unlock_irqrestore(&plchan->vc.lock, flags);
return 0;
}
static int pl08x_pause(struct dma_chan *chan)
{
struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
unsigned long flags;
/*
* Anything succeeds on channels with no physical allocation and
* no queued transfers.
......@@ -1717,42 +1742,35 @@ static int pl08x_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
return 0;
}
switch (cmd) {
case DMA_TERMINATE_ALL:
plchan->state = PL08X_CHAN_IDLE;
pl08x_pause_phy_chan(plchan->phychan);
plchan->state = PL08X_CHAN_PAUSED;
if (plchan->phychan) {
/*
* Mark physical channel as free and free any slave
* signal
*/
pl08x_phy_free(plchan);
}
/* Dequeue jobs and free LLIs */
if (plchan->at) {
pl08x_desc_free(&plchan->at->vd);
plchan->at = NULL;
}
/* Dequeue jobs not yet fired as well */
pl08x_free_txd_list(pl08x, plchan);
break;
case DMA_PAUSE:
pl08x_pause_phy_chan(plchan->phychan);
plchan->state = PL08X_CHAN_PAUSED;
break;
case DMA_RESUME:
pl08x_resume_phy_chan(plchan->phychan);
plchan->state = PL08X_CHAN_RUNNING;
break;
default:
/* Unknown command */
ret = -ENXIO;
break;
spin_unlock_irqrestore(&plchan->vc.lock, flags);
return 0;
}
static int pl08x_resume(struct dma_chan *chan)
{
struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
unsigned long flags;
/*
* Anything succeeds on channels with no physical allocation and
* no queued transfers.
*/
spin_lock_irqsave(&plchan->vc.lock, flags);
if (!plchan->phychan && !plchan->at) {
spin_unlock_irqrestore(&plchan->vc.lock, flags);
return 0;
}
pl08x_resume_phy_chan(plchan->phychan);
plchan->state = PL08X_CHAN_RUNNING;
spin_unlock_irqrestore(&plchan->vc.lock, flags);
return ret;
return 0;
}
bool pl08x_filter_id(struct dma_chan *chan, void *chan_id)
......@@ -2048,7 +2066,10 @@ static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
pl08x->memcpy.device_prep_dma_interrupt = pl08x_prep_dma_interrupt;
pl08x->memcpy.device_tx_status = pl08x_dma_tx_status;
pl08x->memcpy.device_issue_pending = pl08x_issue_pending;
pl08x->memcpy.device_control = pl08x_control;
pl08x->memcpy.device_config = pl08x_config;
pl08x->memcpy.device_pause = pl08x_pause;
pl08x->memcpy.device_resume = pl08x_resume;
pl08x->memcpy.device_terminate_all = pl08x_terminate_all;
/* Initialize slave engine */
dma_cap_set(DMA_SLAVE, pl08x->slave.cap_mask);
......@@ -2061,7 +2082,10 @@ static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
pl08x->slave.device_issue_pending = pl08x_issue_pending;
pl08x->slave.device_prep_slave_sg = pl08x_prep_slave_sg;
pl08x->slave.device_prep_dma_cyclic = pl08x_prep_dma_cyclic;
pl08x->slave.device_control = pl08x_control;
pl08x->slave.device_config = pl08x_config;
pl08x->slave.device_pause = pl08x_pause;
pl08x->slave.device_resume = pl08x_resume;
pl08x->slave.device_terminate_all = pl08x_terminate_all;
/* Get the platform data */
pl08x->pd = dev_get_platdata(&adev->dev);
......
......@@ -42,6 +42,11 @@
#define ATC_DEFAULT_CFG (ATC_FIFOCFG_HALFFIFO)
#define ATC_DEFAULT_CTRLB (ATC_SIF(AT_DMA_MEM_IF) \
|ATC_DIF(AT_DMA_MEM_IF))
#define ATC_DMA_BUSWIDTHS\
(BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) |\
BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |\
BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |\
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES))
/*
* Initial number of descriptors to allocate for each channel. This could
......@@ -972,11 +977,13 @@ atc_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
return NULL;
}
static int set_runtime_config(struct dma_chan *chan,
struct dma_slave_config *sconfig)
static int atc_config(struct dma_chan *chan,
struct dma_slave_config *sconfig)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
dev_vdbg(chan2dev(chan), "%s\n", __func__);
/* Check if chan is configured for slave transfers */
if (!chan->private)
return -EINVAL;
......@@ -989,9 +996,28 @@ static int set_runtime_config(struct dma_chan *chan,
return 0;
}
static int atc_pause(struct dma_chan *chan)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
struct at_dma *atdma = to_at_dma(chan->device);
int chan_id = atchan->chan_common.chan_id;
unsigned long flags;
static int atc_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg)
LIST_HEAD(list);
dev_vdbg(chan2dev(chan), "%s\n", __func__);
spin_lock_irqsave(&atchan->lock, flags);
dma_writel(atdma, CHER, AT_DMA_SUSP(chan_id));
set_bit(ATC_IS_PAUSED, &atchan->status);
spin_unlock_irqrestore(&atchan->lock, flags);
return 0;
}
static int atc_resume(struct dma_chan *chan)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
struct at_dma *atdma = to_at_dma(chan->device);
......@@ -1000,60 +1026,61 @@ static int atc_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
LIST_HEAD(list);
dev_vdbg(chan2dev(chan), "atc_control (%d)\n", cmd);
dev_vdbg(chan2dev(chan), "%s\n", __func__);
if (cmd == DMA_PAUSE) {
spin_lock_irqsave(&atchan->lock, flags);
if (!atc_chan_is_paused(atchan))
return 0;
dma_writel(atdma, CHER, AT_DMA_SUSP(chan_id));
set_bit(ATC_IS_PAUSED, &atchan->status);
spin_lock_irqsave(&atchan->lock, flags);
spin_unlock_irqrestore(&atchan->lock, flags);
} else if (cmd == DMA_RESUME) {
if (!atc_chan_is_paused(atchan))
return 0;
dma_writel(atdma, CHDR, AT_DMA_RES(chan_id));
clear_bit(ATC_IS_PAUSED, &atchan->status);
spin_lock_irqsave(&atchan->lock, flags);
spin_unlock_irqrestore(&atchan->lock, flags);
dma_writel(atdma, CHDR, AT_DMA_RES(chan_id));
clear_bit(ATC_IS_PAUSED, &atchan->status);
return 0;
}
spin_unlock_irqrestore(&atchan->lock, flags);
} else if (cmd == DMA_TERMINATE_ALL) {
struct at_desc *desc, *_desc;
/*
* This is only called when something went wrong elsewhere, so
* we don't really care about the data. Just disable the
* channel. We still have to poll the channel enable bit due
* to AHB/HSB limitations.
*/
spin_lock_irqsave(&atchan->lock, flags);
static int atc_terminate_all(struct dma_chan *chan)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
struct at_dma *atdma = to_at_dma(chan->device);
int chan_id = atchan->chan_common.chan_id;
struct at_desc *desc, *_desc;
unsigned long flags;
/* disabling channel: must also remove suspend state */
dma_writel(atdma, CHDR, AT_DMA_RES(chan_id) | atchan->mask);
LIST_HEAD(list);
/* confirm that this channel is disabled */
while (dma_readl(atdma, CHSR) & atchan->mask)
cpu_relax();
dev_vdbg(chan2dev(chan), "%s\n", __func__);
/* active_list entries will end up before queued entries */
list_splice_init(&atchan->queue, &list);
list_splice_init(&atchan->active_list, &list);
/*
* This is only called when something went wrong elsewhere, so
* we don't really care about the data. Just disable the
* channel. We still have to poll the channel enable bit due
* to AHB/HSB limitations.
*/
spin_lock_irqsave(&atchan->lock, flags);
/* Flush all pending and queued descriptors */
list_for_each_entry_safe(desc, _desc, &list, desc_node)
atc_chain_complete(atchan, desc);
/* disabling channel: must also remove suspend state */
dma_writel(atdma, CHDR, AT_DMA_RES(chan_id) | atchan->mask);
clear_bit(ATC_IS_PAUSED, &atchan->status);
/* if channel dedicated to cyclic operations, free it */
clear_bit(ATC_IS_CYCLIC, &atchan->status);
/* confirm that this channel is disabled */
while (dma_readl(atdma, CHSR) & atchan->mask)
cpu_relax();
spin_unlock_irqrestore(&atchan->lock, flags);
} else if (cmd == DMA_SLAVE_CONFIG) {
return set_runtime_config(chan, (struct dma_slave_config *)arg);
} else {
return -ENXIO;
}
/* active_list entries will end up before queued entries */
list_splice_init(&atchan->queue, &list);
list_splice_init(&atchan->active_list, &list);
/* Flush all pending and queued descriptors */
list_for_each_entry_safe(desc, _desc, &list, desc_node)
atc_chain_complete(atchan, desc);
clear_bit(ATC_IS_PAUSED, &atchan->status);
/* if channel dedicated to cyclic operations, free it */
clear_bit(ATC_IS_CYCLIC, &atchan->status);
spin_unlock_irqrestore(&atchan->lock, flags);
return 0;
}
......@@ -1505,7 +1532,14 @@ static int __init at_dma_probe(struct platform_device *pdev)
/* controller can do slave DMA: can trigger cyclic transfers */
dma_cap_set(DMA_CYCLIC, atdma->dma_common.cap_mask);
atdma->dma_common.device_prep_dma_cyclic = atc_prep_dma_cyclic;
atdma->dma_common.device_control = atc_control;
atdma->dma_common.device_config = atc_config;
atdma->dma_common.device_pause = atc_pause;
atdma->dma_common.device_resume = atc_resume;
atdma->dma_common.device_terminate_all = atc_terminate_all;
atdma->dma_common.src_addr_widths = ATC_DMA_BUSWIDTHS;
atdma->dma_common.dst_addr_widths = ATC_DMA_BUSWIDTHS;
atdma->dma_common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
atdma->dma_common.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
}
dma_writel(atdma, EN, AT_DMA_ENABLE);
......@@ -1622,7 +1656,7 @@ static void atc_suspend_cyclic(struct at_dma_chan *atchan)
if (!atc_chan_is_paused(atchan)) {
dev_warn(chan2dev(chan),
"cyclic channel not paused, should be done by channel user\n");
atc_control(chan, DMA_PAUSE, 0);
atc_pause(chan);
}
/* now preserve additional data for cyclic operations */
......
......@@ -232,7 +232,8 @@ enum atc_status {
* @save_dscr: for cyclic operations, preserve next descriptor address in
* the cyclic list on suspend/resume cycle
* @remain_desc: to save remain desc length
* @dma_sconfig: configuration for slave transfers, passed via DMA_SLAVE_CONFIG
* @dma_sconfig: configuration for slave transfers, passed via
* .device_config
* @lock: serializes enqueue/dequeue operations to descriptors lists
* @active_list: list of descriptors the dmaengine is currently running on
* @queue: list of descriptors ready to be submitted to engine
......
......@@ -25,6 +25,7 @@
#include <linux/dmapool.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/of_dma.h>
......@@ -174,6 +175,13 @@
#define AT_XDMAC_MAX_CHAN 0x20
#define AT_XDMAC_DMA_BUSWIDTHS\
(BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) |\
BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |\
BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |\
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) |\
BIT(DMA_SLAVE_BUSWIDTH_8_BYTES))
enum atc_status {
AT_XDMAC_CHAN_IS_CYCLIC = 0,
AT_XDMAC_CHAN_IS_PAUSED,
......@@ -184,15 +192,15 @@ struct at_xdmac_chan {
struct dma_chan chan;
void __iomem *ch_regs;
u32 mask; /* Channel Mask */
u32 cfg[3]; /* Channel Configuration Register */
#define AT_XDMAC_CUR_CFG 0 /* Current channel conf */
#define AT_XDMAC_DEV_TO_MEM_CFG 1 /* Predefined dev to mem channel conf */
#define AT_XDMAC_MEM_TO_DEV_CFG 2 /* Predefined mem to dev channel conf */
u32 cfg[2]; /* Channel Configuration Register */
#define AT_XDMAC_DEV_TO_MEM_CFG 0 /* Predefined dev to mem channel conf */
#define AT_XDMAC_MEM_TO_DEV_CFG 1 /* Predefined mem to dev channel conf */
u8 perid; /* Peripheral ID */
u8 perif; /* Peripheral Interface */
u8 memif; /* Memory Interface */
u32 per_src_addr;
u32 per_dst_addr;
u32 save_cc;
u32 save_cim;
u32 save_cnda;
u32 save_cndc;
......@@ -344,20 +352,13 @@ static void at_xdmac_start_xfer(struct at_xdmac_chan *atchan,
at_xdmac_chan_write(atchan, AT_XDMAC_CNDA, reg);
/*
* When doing memory to memory transfer we need to use the next
* When doing non cyclic transfer we need to use the next
* descriptor view 2 since some fields of the configuration register
* depend on transfer size and src/dest addresses.
*/
if (is_slave_direction(first->direction)) {
if (at_xdmac_chan_is_cyclic(atchan)) {
reg = AT_XDMAC_CNDC_NDVIEW_NDV1;
if (first->direction == DMA_MEM_TO_DEV)
atchan->cfg[AT_XDMAC_CUR_CFG] =
atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG];
else
atchan->cfg[AT_XDMAC_CUR_CFG] =
atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG];
at_xdmac_chan_write(atchan, AT_XDMAC_CC,
atchan->cfg[AT_XDMAC_CUR_CFG]);
at_xdmac_chan_write(atchan, AT_XDMAC_CC, first->lld.mbr_cfg);
} else {
/*
* No need to write AT_XDMAC_CC reg, it will be done when the
......@@ -561,7 +562,6 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
struct at_xdmac_desc *first = NULL, *prev = NULL;
struct scatterlist *sg;
int i;
u32 cfg;
unsigned int xfer_size = 0;
if (!sgl)
......@@ -583,7 +583,7 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
/* Prepare descriptors. */
for_each_sg(sgl, sg, sg_len, i) {
struct at_xdmac_desc *desc = NULL;
u32 len, mem;
u32 len, mem, dwidth, fixed_dwidth;
len = sg_dma_len(sg);
mem = sg_dma_address(sg);
......@@ -608,17 +608,21 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
if (direction == DMA_DEV_TO_MEM) {
desc->lld.mbr_sa = atchan->per_src_addr;
desc->lld.mbr_da = mem;
cfg = atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG];
desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG];
} else {
desc->lld.mbr_sa = mem;
desc->lld.mbr_da = atchan->per_dst_addr;
cfg = atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG];
desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG];
}
desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV1 /* next descriptor view */
| AT_XDMAC_MBR_UBC_NDEN /* next descriptor dst parameter update */
| AT_XDMAC_MBR_UBC_NSEN /* next descriptor src parameter update */
| (i == sg_len - 1 ? 0 : AT_XDMAC_MBR_UBC_NDE) /* descriptor fetch */
| len / (1 << at_xdmac_get_dwidth(cfg)); /* microblock length */
dwidth = at_xdmac_get_dwidth(desc->lld.mbr_cfg);
fixed_dwidth = IS_ALIGNED(len, 1 << dwidth)
? at_xdmac_get_dwidth(desc->lld.mbr_cfg)
: AT_XDMAC_CC_DWIDTH_BYTE;
desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV2 /* next descriptor view */
| AT_XDMAC_MBR_UBC_NDEN /* next descriptor dst parameter update */
| AT_XDMAC_MBR_UBC_NSEN /* next descriptor src parameter update */
| (i == sg_len - 1 ? 0 : AT_XDMAC_MBR_UBC_NDE) /* descriptor fetch */
| (len >> fixed_dwidth); /* microblock length */
dev_dbg(chan2dev(chan),
"%s: lld: mbr_sa=%pad, mbr_da=%pad, mbr_ubc=0x%08x\n",
__func__, &desc->lld.mbr_sa, &desc->lld.mbr_da, desc->lld.mbr_ubc);
......@@ -882,7 +886,7 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
enum dma_status ret;
int residue;
u32 cur_nda, mask, value;
u8 dwidth = at_xdmac_get_dwidth(atchan->cfg[AT_XDMAC_CUR_CFG]);
u8 dwidth = 0;
ret = dma_cookie_status(chan, cookie, txstate);
if (ret == DMA_COMPLETE)
......@@ -912,7 +916,7 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
*/
mask = AT_XDMAC_CC_TYPE | AT_XDMAC_CC_DSYNC;
value = AT_XDMAC_CC_TYPE_PER_TRAN | AT_XDMAC_CC_DSYNC_PER2MEM;
if ((atchan->cfg[AT_XDMAC_CUR_CFG] & mask) == value) {
if ((desc->lld.mbr_cfg & mask) == value) {
at_xdmac_write(atxdmac, AT_XDMAC_GSWF, atchan->mask);
while (!(at_xdmac_chan_read(atchan, AT_XDMAC_CIS) & AT_XDMAC_CIS_FIS))
cpu_relax();
......@@ -926,6 +930,7 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
*/
descs_list = &desc->descs_list;
list_for_each_entry_safe(desc, _desc, descs_list, desc_node) {
dwidth = at_xdmac_get_dwidth(desc->lld.mbr_cfg);
residue -= (desc->lld.mbr_ubc & 0xffffff) << dwidth;
if ((desc->lld.mbr_nda & 0xfffffffc) == cur_nda)
break;
......@@ -1107,58 +1112,80 @@ static void at_xdmac_issue_pending(struct dma_chan *chan)
return;
}
static int at_xdmac_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg)
static int at_xdmac_device_config(struct dma_chan *chan,
struct dma_slave_config *config)
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
int ret;
dev_dbg(chan2dev(chan), "%s\n", __func__);
spin_lock_bh(&atchan->lock);
ret = at_xdmac_set_slave_config(chan, config);
spin_unlock_bh(&atchan->lock);
return ret;
}
static int at_xdmac_device_pause(struct dma_chan *chan)
{
struct at_xdmac_desc *desc, *_desc;
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac *atxdmac = to_at_xdmac(atchan->chan.device);
int ret = 0;
dev_dbg(chan2dev(chan), "%s: cmd=%d\n", __func__, cmd);
dev_dbg(chan2dev(chan), "%s\n", __func__);
if (test_and_set_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status))
return 0;
spin_lock_bh(&atchan->lock);
at_xdmac_write(atxdmac, AT_XDMAC_GRWS, atchan->mask);
while (at_xdmac_chan_read(atchan, AT_XDMAC_CC)
& (AT_XDMAC_CC_WRIP | AT_XDMAC_CC_RDIP))
cpu_relax();
spin_unlock_bh(&atchan->lock);
switch (cmd) {
case DMA_PAUSE:
at_xdmac_write(atxdmac, AT_XDMAC_GRWS, atchan->mask);
set_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status);
break;
return 0;
}
case DMA_RESUME:
if (!at_xdmac_chan_is_paused(atchan))
break;
static int at_xdmac_device_resume(struct dma_chan *chan)
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac *atxdmac = to_at_xdmac(atchan->chan.device);
at_xdmac_write(atxdmac, AT_XDMAC_GRWR, atchan->mask);
clear_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status);
break;
dev_dbg(chan2dev(chan), "%s\n", __func__);
case DMA_TERMINATE_ALL:
at_xdmac_write(atxdmac, AT_XDMAC_GD, atchan->mask);
while (at_xdmac_read(atxdmac, AT_XDMAC_GS) & atchan->mask)
cpu_relax();
spin_lock_bh(&atchan->lock);
if (!at_xdmac_chan_is_paused(atchan))
return 0;
at_xdmac_write(atxdmac, AT_XDMAC_GRWR, atchan->mask);
clear_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status);
spin_unlock_bh(&atchan->lock);
/* Cancel all pending transfers. */
list_for_each_entry_safe(desc, _desc, &atchan->xfers_list, xfer_node)
at_xdmac_remove_xfer(atchan, desc);
return 0;
}
static int at_xdmac_device_terminate_all(struct dma_chan *chan)
{
struct at_xdmac_desc *desc, *_desc;
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac *atxdmac = to_at_xdmac(atchan->chan.device);
clear_bit(AT_XDMAC_CHAN_IS_CYCLIC, &atchan->status);
break;
dev_dbg(chan2dev(chan), "%s\n", __func__);
case DMA_SLAVE_CONFIG:
ret = at_xdmac_set_slave_config(chan,
(struct dma_slave_config *)arg);
break;
spin_lock_bh(&atchan->lock);
at_xdmac_write(atxdmac, AT_XDMAC_GD, atchan->mask);
while (at_xdmac_read(atxdmac, AT_XDMAC_GS) & atchan->mask)
cpu_relax();
default:
dev_err(chan2dev(chan),
"unmanaged or unknown dma control cmd: %d\n", cmd);
ret = -ENXIO;
}
/* Cancel all pending transfers. */
list_for_each_entry_safe(desc, _desc, &atchan->xfers_list, xfer_node)
at_xdmac_remove_xfer(atchan, desc);
clear_bit(AT_XDMAC_CHAN_IS_CYCLIC, &atchan->status);
spin_unlock_bh(&atchan->lock);
return ret;
return 0;
}
static int at_xdmac_alloc_chan_resources(struct dma_chan *chan)
......@@ -1217,27 +1244,6 @@ static void at_xdmac_free_chan_resources(struct dma_chan *chan)
return;
}
#define AT_XDMAC_DMA_BUSWIDTHS\
(BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) |\
BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |\
BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |\
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) |\
BIT(DMA_SLAVE_BUSWIDTH_8_BYTES))
static int at_xdmac_device_slave_caps(struct dma_chan *dchan,
struct dma_slave_caps *caps)
{
caps->src_addr_widths = AT_XDMAC_DMA_BUSWIDTHS;
caps->dstn_addr_widths = AT_XDMAC_DMA_BUSWIDTHS;
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = true;
caps->cmd_terminate = true;
caps->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
return 0;
}
#ifdef CONFIG_PM
static int atmel_xdmac_prepare(struct device *dev)
{
......@@ -1268,9 +1274,10 @@ static int atmel_xdmac_suspend(struct device *dev)
list_for_each_entry_safe(chan, _chan, &atxdmac->dma.channels, device_node) {
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
atchan->save_cc = at_xdmac_chan_read(atchan, AT_XDMAC_CC);
if (at_xdmac_chan_is_cyclic(atchan)) {
if (!at_xdmac_chan_is_paused(atchan))
at_xdmac_control(chan, DMA_PAUSE, 0);
at_xdmac_device_pause(chan);
atchan->save_cim = at_xdmac_chan_read(atchan, AT_XDMAC_CIM);
atchan->save_cnda = at_xdmac_chan_read(atchan, AT_XDMAC_CNDA);
atchan->save_cndc = at_xdmac_chan_read(atchan, AT_XDMAC_CNDC);
......@@ -1290,7 +1297,6 @@ static int atmel_xdmac_resume(struct device *dev)
struct at_xdmac_chan *atchan;
struct dma_chan *chan, *_chan;
int i;
u32 cfg;
clk_prepare_enable(atxdmac->clk);
......@@ -1305,8 +1311,7 @@ static int atmel_xdmac_resume(struct device *dev)
at_xdmac_write(atxdmac, AT_XDMAC_GE, atxdmac->save_gs);
list_for_each_entry_safe(chan, _chan, &atxdmac->dma.channels, device_node) {
atchan = to_at_xdmac_chan(chan);
cfg = atchan->cfg[AT_XDMAC_CUR_CFG];
at_xdmac_chan_write(atchan, AT_XDMAC_CC, cfg);
at_xdmac_chan_write(atchan, AT_XDMAC_CC, atchan->save_cc);
if (at_xdmac_chan_is_cyclic(atchan)) {
at_xdmac_chan_write(atchan, AT_XDMAC_CNDA, atchan->save_cnda);
at_xdmac_chan_write(atchan, AT_XDMAC_CNDC, atchan->save_cndc);
......@@ -1407,8 +1412,14 @@ static int at_xdmac_probe(struct platform_device *pdev)
atxdmac->dma.device_prep_dma_cyclic = at_xdmac_prep_dma_cyclic;
atxdmac->dma.device_prep_dma_memcpy = at_xdmac_prep_dma_memcpy;
atxdmac->dma.device_prep_slave_sg = at_xdmac_prep_slave_sg;
atxdmac->dma.device_control = at_xdmac_control;
atxdmac->dma.device_slave_caps = at_xdmac_device_slave_caps;
atxdmac->dma.device_config = at_xdmac_device_config;
atxdmac->dma.device_pause = at_xdmac_device_pause;
atxdmac->dma.device_resume = at_xdmac_device_resume;
atxdmac->dma.device_terminate_all = at_xdmac_device_terminate_all;
atxdmac->dma.src_addr_widths = AT_XDMAC_DMA_BUSWIDTHS;
atxdmac->dma.dst_addr_widths = AT_XDMAC_DMA_BUSWIDTHS;
atxdmac->dma.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
atxdmac->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
/* Disable all chans and interrupts. */
at_xdmac_off(atxdmac);
......@@ -1507,7 +1518,6 @@ static struct platform_driver at_xdmac_driver = {
.remove = at_xdmac_remove,
.driver = {
.name = "at_xdmac",
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(atmel_xdmac_dt_ids),
.pm = &atmel_xdmac_dev_pm_ops,
}
......
......@@ -436,9 +436,11 @@ static struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic(
return vchan_tx_prep(&c->vc, &d->vd, flags);
}
static int bcm2835_dma_slave_config(struct bcm2835_chan *c,
struct dma_slave_config *cfg)
static int bcm2835_dma_slave_config(struct dma_chan *chan,
struct dma_slave_config *cfg)
{
struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
if ((cfg->direction == DMA_DEV_TO_MEM &&
cfg->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||
(cfg->direction == DMA_MEM_TO_DEV &&
......@@ -452,8 +454,9 @@ static int bcm2835_dma_slave_config(struct bcm2835_chan *c,
return 0;
}
static int bcm2835_dma_terminate_all(struct bcm2835_chan *c)
static int bcm2835_dma_terminate_all(struct dma_chan *chan)
{
struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
struct bcm2835_dmadev *d = to_bcm2835_dma_dev(c->vc.chan.device);
unsigned long flags;
int timeout = 10000;
......@@ -495,24 +498,6 @@ static int bcm2835_dma_terminate_all(struct bcm2835_chan *c)
return 0;
}
static int bcm2835_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg)
{
struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
switch (cmd) {
case DMA_SLAVE_CONFIG:
return bcm2835_dma_slave_config(c,
(struct dma_slave_config *)arg);
case DMA_TERMINATE_ALL:
return bcm2835_dma_terminate_all(c);
default:
return -ENXIO;
}
}
static int bcm2835_dma_chan_init(struct bcm2835_dmadev *d, int chan_id, int irq)
{
struct bcm2835_chan *c;
......@@ -565,18 +550,6 @@ static struct dma_chan *bcm2835_dma_xlate(struct of_phandle_args *spec,
return chan;
}
static int bcm2835_dma_device_slave_caps(struct dma_chan *dchan,
struct dma_slave_caps *caps)
{
caps->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
caps->dstn_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = false;
caps->cmd_terminate = true;
return 0;
}
static int bcm2835_dma_probe(struct platform_device *pdev)
{
struct bcm2835_dmadev *od;
......@@ -615,9 +588,12 @@ static int bcm2835_dma_probe(struct platform_device *pdev)
od->ddev.device_free_chan_resources = bcm2835_dma_free_chan_resources;
od->ddev.device_tx_status = bcm2835_dma_tx_status;
od->ddev.device_issue_pending = bcm2835_dma_issue_pending;
od->ddev.device_slave_caps = bcm2835_dma_device_slave_caps;
od->ddev.device_prep_dma_cyclic = bcm2835_dma_prep_dma_cyclic;
od->ddev.device_control = bcm2835_dma_control;
od->ddev.device_config = bcm2835_dma_slave_config;
od->ddev.device_terminate_all = bcm2835_dma_terminate_all;
od->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
od->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
od->ddev.dev = &pdev->dev;
INIT_LIST_HEAD(&od->ddev.channels);
spin_lock_init(&od->lock);
......
......@@ -1690,7 +1690,7 @@ static u32 coh901318_get_bytes_left(struct dma_chan *chan)
* Pauses a transfer without losing data. Enables power save.
* Use this function in conjunction with coh901318_resume.
*/
static void coh901318_pause(struct dma_chan *chan)
static int coh901318_pause(struct dma_chan *chan)
{
u32 val;
unsigned long flags;
......@@ -1730,12 +1730,13 @@ static void coh901318_pause(struct dma_chan *chan)
enable_powersave(cohc);
spin_unlock_irqrestore(&cohc->lock, flags);
return 0;
}
/* Resumes a transfer that has been stopped via 300_dma_stop(..).
Power save is handled.
*/
static void coh901318_resume(struct dma_chan *chan)
static int coh901318_resume(struct dma_chan *chan)
{
u32 val;
unsigned long flags;
......@@ -1760,6 +1761,7 @@ static void coh901318_resume(struct dma_chan *chan)
}
spin_unlock_irqrestore(&cohc->lock, flags);
return 0;
}
bool coh901318_filter_id(struct dma_chan *chan, void *chan_id)
......@@ -2114,6 +2116,57 @@ static irqreturn_t dma_irq_handler(int irq, void *dev_id)
return IRQ_HANDLED;
}
static int coh901318_terminate_all(struct dma_chan *chan)
{
unsigned long flags;
struct coh901318_chan *cohc = to_coh901318_chan(chan);
struct coh901318_desc *cohd;
void __iomem *virtbase = cohc->base->virtbase;
/* The remainder of this function terminates the transfer */
coh901318_pause(chan);
spin_lock_irqsave(&cohc->lock, flags);
/* Clear any pending BE or TC interrupt */
if (cohc->id < 32) {
writel(1 << cohc->id, virtbase + COH901318_BE_INT_CLEAR1);
writel(1 << cohc->id, virtbase + COH901318_TC_INT_CLEAR1);
} else {
writel(1 << (cohc->id - 32), virtbase +
COH901318_BE_INT_CLEAR2);
writel(1 << (cohc->id - 32), virtbase +
COH901318_TC_INT_CLEAR2);
}
enable_powersave(cohc);
while ((cohd = coh901318_first_active_get(cohc))) {
/* release the lli allocation*/
coh901318_lli_free(&cohc->base->pool, &cohd->lli);
/* return desc to free-list */
coh901318_desc_remove(cohd);
coh901318_desc_free(cohc, cohd);
}
while ((cohd = coh901318_first_queued(cohc))) {
/* release the lli allocation*/
coh901318_lli_free(&cohc->base->pool, &cohd->lli);
/* return desc to free-list */
coh901318_desc_remove(cohd);
coh901318_desc_free(cohc, cohd);
}
cohc->nbr_active_done = 0;
cohc->busy = 0;
spin_unlock_irqrestore(&cohc->lock, flags);
return 0;
}
static int coh901318_alloc_chan_resources(struct dma_chan *chan)
{
struct coh901318_chan *cohc = to_coh901318_chan(chan);
......@@ -2156,7 +2209,7 @@ coh901318_free_chan_resources(struct dma_chan *chan)
spin_unlock_irqrestore(&cohc->lock, flags);
dmaengine_terminate_all(chan);
coh901318_terminate_all(chan);
}
......@@ -2461,8 +2514,8 @@ static const struct burst_table burst_sizes[] = {
},
};
static void coh901318_dma_set_runtimeconfig(struct dma_chan *chan,
struct dma_slave_config *config)
static int coh901318_dma_set_runtimeconfig(struct dma_chan *chan,
struct dma_slave_config *config)
{
struct coh901318_chan *cohc = to_coh901318_chan(chan);
dma_addr_t addr;
......@@ -2482,7 +2535,7 @@ static void coh901318_dma_set_runtimeconfig(struct dma_chan *chan,
maxburst = config->dst_maxburst;
} else {
dev_err(COHC_2_DEV(cohc), "illegal channel mode\n");
return;
return -EINVAL;
}
dev_dbg(COHC_2_DEV(cohc), "configure channel for %d byte transfers\n",
......@@ -2528,7 +2581,7 @@ static void coh901318_dma_set_runtimeconfig(struct dma_chan *chan,
default:
dev_err(COHC_2_DEV(cohc),
"bad runtimeconfig: alien address width\n");
return;
return -EINVAL;
}
ctrl |= burst_sizes[i].reg;
......@@ -2538,84 +2591,12 @@ static void coh901318_dma_set_runtimeconfig(struct dma_chan *chan,
cohc->addr = addr;
cohc->ctrl = ctrl;
}
static int
coh901318_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg)
{
unsigned long flags;
struct coh901318_chan *cohc = to_coh901318_chan(chan);
struct coh901318_desc *cohd;
void __iomem *virtbase = cohc->base->virtbase;
if (cmd == DMA_SLAVE_CONFIG) {
struct dma_slave_config *config =
(struct dma_slave_config *) arg;
coh901318_dma_set_runtimeconfig(chan, config);
return 0;
}
if (cmd == DMA_PAUSE) {
coh901318_pause(chan);
return 0;
}
if (cmd == DMA_RESUME) {
coh901318_resume(chan);
return 0;
}
if (cmd != DMA_TERMINATE_ALL)
return -ENXIO;
/* The remainder of this function terminates the transfer */
coh901318_pause(chan);
spin_lock_irqsave(&cohc->lock, flags);
/* Clear any pending BE or TC interrupt */
if (cohc->id < 32) {
writel(1 << cohc->id, virtbase + COH901318_BE_INT_CLEAR1);
writel(1 << cohc->id, virtbase + COH901318_TC_INT_CLEAR1);
} else {
writel(1 << (cohc->id - 32), virtbase +
COH901318_BE_INT_CLEAR2);
writel(1 << (cohc->id - 32), virtbase +
COH901318_TC_INT_CLEAR2);
}
enable_powersave(cohc);
while ((cohd = coh901318_first_active_get(cohc))) {
/* release the lli allocation*/
coh901318_lli_free(&cohc->base->pool, &cohd->lli);
/* return desc to free-list */
coh901318_desc_remove(cohd);
coh901318_desc_free(cohc, cohd);
}
while ((cohd = coh901318_first_queued(cohc))) {
/* release the lli allocation*/
coh901318_lli_free(&cohc->base->pool, &cohd->lli);
/* return desc to free-list */
coh901318_desc_remove(cohd);
coh901318_desc_free(cohc, cohd);
}
cohc->nbr_active_done = 0;
cohc->busy = 0;
spin_unlock_irqrestore(&cohc->lock, flags);
return 0;
}
void coh901318_base_init(struct dma_device *dma, const int *pick_chans,
struct coh901318_base *base)
static void coh901318_base_init(struct dma_device *dma, const int *pick_chans,
struct coh901318_base *base)
{
int chans_i;
int i = 0;
......@@ -2717,7 +2698,10 @@ static int __init coh901318_probe(struct platform_device *pdev)
base->dma_slave.device_prep_slave_sg = coh901318_prep_slave_sg;
base->dma_slave.device_tx_status = coh901318_tx_status;
base->dma_slave.device_issue_pending = coh901318_issue_pending;
base->dma_slave.device_control = coh901318_control;
base->dma_slave.device_config = coh901318_dma_set_runtimeconfig;
base->dma_slave.device_pause = coh901318_pause;
base->dma_slave.device_resume = coh901318_resume;
base->dma_slave.device_terminate_all = coh901318_terminate_all;
base->dma_slave.dev = &pdev->dev;
err = dma_async_device_register(&base->dma_slave);
......@@ -2737,7 +2721,10 @@ static int __init coh901318_probe(struct platform_device *pdev)
base->dma_memcpy.device_prep_dma_memcpy = coh901318_prep_memcpy;
base->dma_memcpy.device_tx_status = coh901318_tx_status;
base->dma_memcpy.device_issue_pending = coh901318_issue_pending;
base->dma_memcpy.device_control = coh901318_control;
base->dma_memcpy.device_config = coh901318_dma_set_runtimeconfig;
base->dma_memcpy.device_pause = coh901318_pause;
base->dma_memcpy.device_resume = coh901318_resume;
base->dma_memcpy.device_terminate_all = coh901318_terminate_all;
base->dma_memcpy.dev = &pdev->dev;
/*
* This controller can only access address at even 32bit boundaries,
......
......@@ -525,12 +525,6 @@ static struct dma_async_tx_descriptor *cppi41_dma_prep_slave_sg(
return &c->txd;
}
static int cpp41_cfg_chan(struct cppi41_channel *c,
struct dma_slave_config *cfg)
{
return 0;
}
static void cppi41_compute_td_desc(struct cppi41_desc *d)
{
d->pd0 = DESC_TYPE_TEARD << DESC_TYPE;
......@@ -647,28 +641,6 @@ static int cppi41_stop_chan(struct dma_chan *chan)
return 0;
}
static int cppi41_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg)
{
struct cppi41_channel *c = to_cpp41_chan(chan);
int ret;
switch (cmd) {
case DMA_SLAVE_CONFIG:
ret = cpp41_cfg_chan(c, (struct dma_slave_config *) arg);
break;
case DMA_TERMINATE_ALL:
ret = cppi41_stop_chan(chan);
break;
default:
ret = -ENXIO;
break;
}
return ret;
}
static void cleanup_chans(struct cppi41_dd *cdd)
{
while (!list_empty(&cdd->ddev.channels)) {
......@@ -953,7 +925,7 @@ static int cppi41_dma_probe(struct platform_device *pdev)
cdd->ddev.device_tx_status = cppi41_dma_tx_status;
cdd->ddev.device_issue_pending = cppi41_dma_issue_pending;
cdd->ddev.device_prep_slave_sg = cppi41_dma_prep_slave_sg;
cdd->ddev.device_control = cppi41_dma_control;
cdd->ddev.device_terminate_all = cppi41_stop_chan;
cdd->ddev.dev = dev;
INIT_LIST_HEAD(&cdd->ddev.channels);
cpp41_dma_info.dma_cap = cdd->ddev.cap_mask;
......
......@@ -210,7 +210,7 @@ static enum jz4740_dma_transfer_size jz4740_dma_maxburst(u32 maxburst)
}
static int jz4740_dma_slave_config(struct dma_chan *c,
const struct dma_slave_config *config)
struct dma_slave_config *config)
{
struct jz4740_dmaengine_chan *chan = to_jz4740_dma_chan(c);
struct jz4740_dma_dev *dmadev = jz4740_dma_chan_get_dev(chan);
......@@ -290,21 +290,6 @@ static int jz4740_dma_terminate_all(struct dma_chan *c)
return 0;
}
static int jz4740_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg)
{
struct dma_slave_config *config = (struct dma_slave_config *)arg;
switch (cmd) {
case DMA_SLAVE_CONFIG:
return jz4740_dma_slave_config(chan, config);
case DMA_TERMINATE_ALL:
return jz4740_dma_terminate_all(chan);
default:
return -ENOSYS;
}
}
static int jz4740_dma_start_transfer(struct jz4740_dmaengine_chan *chan)
{
struct jz4740_dma_dev *dmadev = jz4740_dma_chan_get_dev(chan);
......@@ -561,7 +546,8 @@ static int jz4740_dma_probe(struct platform_device *pdev)
dd->device_issue_pending = jz4740_dma_issue_pending;
dd->device_prep_slave_sg = jz4740_dma_prep_slave_sg;
dd->device_prep_dma_cyclic = jz4740_dma_prep_dma_cyclic;
dd->device_control = jz4740_dma_control;
dd->device_config = jz4740_dma_slave_config;
dd->device_terminate_all = jz4740_dma_terminate_all;
dd->dev = &pdev->dev;
INIT_LIST_HEAD(&dd->channels);
......
......@@ -222,31 +222,35 @@ static void balance_ref_count(struct dma_chan *chan)
*/
static int dma_chan_get(struct dma_chan *chan)
{
int err = -ENODEV;
struct module *owner = dma_chan_to_owner(chan);
int ret;
/* The channel is already in use, update client count */
if (chan->client_count) {
__module_get(owner);
err = 0;
} else if (try_module_get(owner))
err = 0;
goto out;
}
if (err == 0)
chan->client_count++;
if (!try_module_get(owner))
return -ENODEV;
/* allocate upon first client reference */
if (chan->client_count == 1 && err == 0) {
int desc_cnt = chan->device->device_alloc_chan_resources(chan);
if (desc_cnt < 0) {
err = desc_cnt;
chan->client_count = 0;
module_put(owner);
} else if (!dma_has_cap(DMA_PRIVATE, chan->device->cap_mask))
balance_ref_count(chan);
if (chan->device->device_alloc_chan_resources) {
ret = chan->device->device_alloc_chan_resources(chan);
if (ret < 0)
goto err_out;
}
return err;
if (!dma_has_cap(DMA_PRIVATE, chan->device->cap_mask))
balance_ref_count(chan);
out:
chan->client_count++;
return 0;
err_out:
module_put(owner);
return ret;
}
/**
......@@ -257,11 +261,15 @@ static int dma_chan_get(struct dma_chan *chan)
*/
static void dma_chan_put(struct dma_chan *chan)
{
/* This channel is not in use, bail out */
if (!chan->client_count)
return; /* this channel failed alloc_chan_resources */
return;
chan->client_count--;
module_put(dma_chan_to_owner(chan));
if (chan->client_count == 0)
/* This channel is not in use anymore, free it */
if (!chan->client_count && chan->device->device_free_chan_resources)
chan->device->device_free_chan_resources(chan);
}
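
Note the effect of the rework above: device_alloc_chan_resources and
device_free_chan_resources become optional (hence the NULL checks), and
the module reference taken in dma_chan_get() is dropped again on the
allocation error path so the refcount stays balanced.
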
......@@ -471,6 +479,39 @@ static void dma_channel_rebalance(void)
}
}
int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
{
struct dma_device *device;
if (!chan || !caps)
return -EINVAL;
device = chan->device;
/* check if the channel supports slave transactions */
if (!test_bit(DMA_SLAVE, device->cap_mask.bits))
return -ENXIO;
/*
* Check whether it reports it uses the generic slave
* capabilities, if not, that means it doesn't support any
* kind of slave capabilities reporting.
*/
if (!device->directions)
return -ENXIO;
caps->src_addr_widths = device->src_addr_widths;
caps->dst_addr_widths = device->dst_addr_widths;
caps->directions = device->directions;
caps->residue_granularity = device->residue_granularity;
caps->cmd_pause = !!device->device_pause;
caps->cmd_terminate = !!device->device_terminate_all;
return 0;
}
EXPORT_SYMBOL_GPL(dma_get_slave_caps);
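
Clients can use this to check a channel's abilities before depending on
them. A minimal sketch (foo_chan_can_pause() is a made-up helper;
dma_get_slave_caps() and struct dma_slave_caps are as introduced above):

	/* Sketch: only rely on pause/resume if the channel reports it. */
	static bool foo_chan_can_pause(struct dma_chan *chan)
	{
		struct dma_slave_caps caps;

		if (dma_get_slave_caps(chan, &caps))
			return false;

		return caps.cmd_pause;
	}
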
static struct dma_chan *private_candidate(const dma_cap_mask_t *mask,
struct dma_device *dev,
dma_filter_fn fn, void *fn_param)
......@@ -811,17 +852,16 @@ int dma_async_device_register(struct dma_device *device)
!device->device_prep_dma_sg);
BUG_ON(dma_has_cap(DMA_CYCLIC, device->cap_mask) &&
!device->device_prep_dma_cyclic);
BUG_ON(dma_has_cap(DMA_SLAVE, device->cap_mask) &&
!device->device_control);
BUG_ON(dma_has_cap(DMA_INTERLEAVE, device->cap_mask) &&
!device->device_prep_interleaved_dma);
BUG_ON(!device->device_alloc_chan_resources);
BUG_ON(!device->device_free_chan_resources);
BUG_ON(!device->device_tx_status);
BUG_ON(!device->device_issue_pending);
BUG_ON(!device->dev);
WARN(dma_has_cap(DMA_SLAVE, device->cap_mask) && !device->directions,
"this driver doesn't support generic slave capabilities reporting\n");
/* note: this only matters in the
* CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH=n case
*/
......
......@@ -349,14 +349,14 @@ static void dbg_result(const char *err, unsigned int n, unsigned int src_off,
unsigned long data)
{
pr_debug("%s: result #%u: '%s' with src_off=0x%x dst_off=0x%x len=0x%x (%lu)\n",
current->comm, n, err, src_off, dst_off, len, data);
current->comm, n, err, src_off, dst_off, len, data);
}
#define verbose_result(err, n, src_off, dst_off, len, data) ({ \
if (verbose) \
result(err, n, src_off, dst_off, len, data); \
else \
dbg_result(err, n, src_off, dst_off, len, data); \
#define verbose_result(err, n, src_off, dst_off, len, data) ({ \
if (verbose) \
result(err, n, src_off, dst_off, len, data); \
else \
dbg_result(err, n, src_off, dst_off, len, data);\
})
static unsigned long long dmatest_persec(s64 runtime, unsigned int val)
......@@ -405,7 +405,6 @@ static int dmatest_func(void *data)
struct dmatest_params *params;
struct dma_chan *chan;
struct dma_device *dev;
unsigned int src_off, dst_off, len;
unsigned int error_count;
unsigned int failed_tests = 0;
unsigned int total_tests = 0;
......@@ -484,6 +483,7 @@ static int dmatest_func(void *data)
struct dmaengine_unmap_data *um;
dma_addr_t srcs[src_cnt];
dma_addr_t *dsts;
unsigned int src_off, dst_off, len;
u8 align = 0;
total_tests++;
......@@ -502,15 +502,21 @@ static int dmatest_func(void *data)
break;
}
if (params->noverify) {
if (params->noverify)
len = params->buf_size;
else
len = dmatest_random() % params->buf_size + 1;
len = (len >> align) << align;
if (!len)
len = 1 << align;
total_len += len;
if (params->noverify) {
src_off = 0;
dst_off = 0;
} else {
len = dmatest_random() % params->buf_size + 1;
len = (len >> align) << align;
if (!len)
len = 1 << align;
src_off = dmatest_random() % (params->buf_size - len + 1);
dst_off = dmatest_random() % (params->buf_size - len + 1);
......@@ -523,11 +529,6 @@ static int dmatest_func(void *data)
params->buf_size);
}
len = (len >> align) << align;
if (!len)
len = 1 << align;
total_len += len;
um = dmaengine_get_unmap_data(dev->dev, src_cnt+dst_cnt,
GFP_KERNEL);
if (!um) {
......
......@@ -61,6 +61,13 @@
*/
#define NR_DESCS_PER_CHANNEL 64
/* The set of bus widths supported by the DMA controller */
#define DW_DMA_BUSWIDTHS \
BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) | \
BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES)
/*----------------------------------------------------------------------*/
static struct device *chan2dev(struct dma_chan *chan)
......@@ -955,8 +962,7 @@ static inline void convert_burst(u32 *maxburst)
*maxburst = 0;
}
static int
set_runtime_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
static int dwc_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
{
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
......@@ -973,16 +979,25 @@ set_runtime_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
return 0;
}
static inline void dwc_chan_pause(struct dw_dma_chan *dwc)
static int dwc_pause(struct dma_chan *chan)
{
u32 cfglo = channel_readl(dwc, CFG_LO);
unsigned int count = 20; /* timeout iterations */
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
unsigned long flags;
unsigned int count = 20; /* timeout iterations */
u32 cfglo;
spin_lock_irqsave(&dwc->lock, flags);
cfglo = channel_readl(dwc, CFG_LO);
channel_writel(dwc, CFG_LO, cfglo | DWC_CFGL_CH_SUSP);
while (!(channel_readl(dwc, CFG_LO) & DWC_CFGL_FIFO_EMPTY) && count--)
udelay(2);
dwc->paused = true;
spin_unlock_irqrestore(&dwc->lock, flags);
return 0;
}
static inline void dwc_chan_resume(struct dw_dma_chan *dwc)
......@@ -994,53 +1009,48 @@ static inline void dwc_chan_resume(struct dw_dma_chan *dwc)
dwc->paused = false;
}
static int dwc_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg)
static int dwc_resume(struct dma_chan *chan)
{
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
struct dw_dma *dw = to_dw_dma(chan->device);
struct dw_desc *desc, *_desc;
unsigned long flags;
LIST_HEAD(list);
if (cmd == DMA_PAUSE) {
spin_lock_irqsave(&dwc->lock, flags);
if (!dwc->paused)
return 0;
dwc_chan_pause(dwc);
spin_lock_irqsave(&dwc->lock, flags);
spin_unlock_irqrestore(&dwc->lock, flags);
} else if (cmd == DMA_RESUME) {
if (!dwc->paused)
return 0;
dwc_chan_resume(dwc);
spin_lock_irqsave(&dwc->lock, flags);
spin_unlock_irqrestore(&dwc->lock, flags);
dwc_chan_resume(dwc);
return 0;
}
spin_unlock_irqrestore(&dwc->lock, flags);
} else if (cmd == DMA_TERMINATE_ALL) {
spin_lock_irqsave(&dwc->lock, flags);
static int dwc_terminate_all(struct dma_chan *chan)
{
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
struct dw_dma *dw = to_dw_dma(chan->device);
struct dw_desc *desc, *_desc;
unsigned long flags;
LIST_HEAD(list);
clear_bit(DW_DMA_IS_SOFT_LLP, &dwc->flags);
spin_lock_irqsave(&dwc->lock, flags);
dwc_chan_disable(dw, dwc);
clear_bit(DW_DMA_IS_SOFT_LLP, &dwc->flags);
dwc_chan_disable(dw, dwc);
dwc_chan_resume(dwc);
dwc_chan_resume(dwc);
/* active_list entries will end up before queued entries */
list_splice_init(&dwc->queue, &list);
list_splice_init(&dwc->active_list, &list);
/* active_list entries will end up before queued entries */
list_splice_init(&dwc->queue, &list);
list_splice_init(&dwc->active_list, &list);
spin_unlock_irqrestore(&dwc->lock, flags);
spin_unlock_irqrestore(&dwc->lock, flags);
/* Flush all pending and queued descriptors */
list_for_each_entry_safe(desc, _desc, &list, desc_node)
dwc_descriptor_complete(dwc, desc, false);
} else if (cmd == DMA_SLAVE_CONFIG) {
return set_runtime_config(chan, (struct dma_slave_config *)arg);
} else {
return -ENXIO;
}
/* Flush all pending and queued descriptors */
list_for_each_entry_safe(desc, _desc, &list, desc_node)
dwc_descriptor_complete(dwc, desc, false);
return 0;
}
......@@ -1551,7 +1561,8 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
}
} else {
dw->nr_masters = pdata->nr_masters;
memcpy(dw->data_width, pdata->data_width, 4);
for (i = 0; i < dw->nr_masters; i++)
dw->data_width[i] = pdata->data_width[i];
}
/* Calculate all channel mask before DMA setup */
......@@ -1656,13 +1667,23 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
dw->dma.device_free_chan_resources = dwc_free_chan_resources;
dw->dma.device_prep_dma_memcpy = dwc_prep_dma_memcpy;
dw->dma.device_prep_slave_sg = dwc_prep_slave_sg;
	dw->dma.device_config = dwc_config;
	dw->dma.device_pause = dwc_pause;
	dw->dma.device_resume = dwc_resume;
	dw->dma.device_terminate_all = dwc_terminate_all;
dw->dma.device_tx_status = dwc_tx_status;
dw->dma.device_issue_pending = dwc_issue_pending;
/* DMA capabilities */
dw->dma.src_addr_widths = DW_DMA_BUSWIDTHS;
dw->dma.dst_addr_widths = DW_DMA_BUSWIDTHS;
dw->dma.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV) |
BIT(DMA_MEM_TO_MEM);
dw->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
err = dma_async_device_register(&dw->dma);
if (err)
goto err_dma_register;
......
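The dw driver above is representative of the whole series: the single device_control(chan, cmd, arg) multiplexer is split into dedicated device_config/device_pause/device_resume/device_terminate_all hooks, and the slave capabilities move from a device_slave_caps callback into plain struct dma_device fields. A condensed sketch of the new-style wiring, with hypothetical foo_* callbacks standing in for a real driver's implementations (the dma_device members and dma_async_device_register() are the real API):

#include <linux/dmaengine.h>

static int foo_config(struct dma_chan *chan, struct dma_slave_config *cfg);
static int foo_pause(struct dma_chan *chan);
static int foo_resume(struct dma_chan *chan);
static int foo_terminate_all(struct dma_chan *chan);

static int foo_register(struct dma_device *dd)
{
	dd->device_config = foo_config;			/* was DMA_SLAVE_CONFIG */
	dd->device_pause = foo_pause;			/* was DMA_PAUSE */
	dd->device_resume = foo_resume;			/* was DMA_RESUME */
	dd->device_terminate_all = foo_terminate_all;	/* was DMA_TERMINATE_ALL */

	/* slave capabilities are now plain fields, filled in at probe time */
	dd->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
	dd->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
	dd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
	dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;

	return dma_async_device_register(dd);
}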
......@@ -100,7 +100,7 @@ dw_dma_parse_dt(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct dw_dma_platform_data *pdata;
	u32 tmp, arr[DW_DMA_MAX_NR_MASTERS];
if (!np) {
dev_err(&pdev->dev, "Missing DT data\n");
......@@ -127,7 +127,7 @@ dw_dma_parse_dt(struct platform_device *pdev)
pdata->block_size = tmp;
if (!of_property_read_u32(np, "dma-masters", &tmp)) {
		if (tmp > DW_DMA_MAX_NR_MASTERS)
return NULL;
pdata->nr_masters = tmp;
......
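The new DW_DMA_MAX_NR_MASTERS constant keeps the DT scratch array and the range check in step. A condensed sketch of the bounded parse under those assumptions (the data_width property name follows the legacy dw binding and is illustrative only; unrelated error handling is trimmed):

	u32 arr[DW_DMA_MAX_NR_MASTERS];
	u32 tmp;

	if (!of_property_read_u32(np, "dma-masters", &tmp)) {
		if (tmp > DW_DMA_MAX_NR_MASTERS)	/* reject bogus DT data */
			return NULL;
		pdata->nr_masters = tmp;
	}

	/* read at most nr_masters entries into the bounded scratch array */
	if (!of_property_read_u32_array(np, "data_width", arr,
					pdata->nr_masters))
		for (tmp = 0; tmp < pdata->nr_masters; tmp++)
			pdata->data_width[tmp] = arr[tmp];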
......@@ -252,7 +252,7 @@ struct dw_dma_chan {
u8 src_master;
u8 dst_master;
	/* configuration passed via .device_config */
struct dma_slave_config dma_sconfig;
};
......@@ -285,7 +285,7 @@ struct dw_dma {
/* hardware configuration */
unsigned char nr_masters;
	unsigned char data_width[DW_DMA_MAX_NR_MASTERS];
};
static inline struct dw_dma_regs __iomem *__dw_regs(struct dw_dma *dw)
......
......@@ -15,6 +15,7 @@
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/edma.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/interrupt.h>
......@@ -244,8 +245,9 @@ static void edma_execute(struct edma_chan *echan)
}
}
static int edma_terminate_all(struct dma_chan *chan)
{
struct edma_chan *echan = to_edma_chan(chan);
unsigned long flags;
LIST_HEAD(head);
......@@ -273,9 +275,11 @@ static int edma_terminate_all(struct edma_chan *echan)
return 0;
}
static int edma_slave_config(struct dma_chan *chan,
			     struct dma_slave_config *cfg)
{
struct edma_chan *echan = to_edma_chan(chan);
if (cfg->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
cfg->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
return -EINVAL;
......@@ -285,8 +289,10 @@ static int edma_slave_config(struct edma_chan *echan,
return 0;
}
static int edma_dma_pause(struct dma_chan *chan)
{
struct edma_chan *echan = to_edma_chan(chan);
/* Pause/Resume only allowed with cyclic mode */
if (!echan->edesc || !echan->edesc->cyclic)
return -EINVAL;
......@@ -295,8 +301,10 @@ static int edma_dma_pause(struct edma_chan *echan)
return 0;
}
static int edma_dma_resume(struct dma_chan *chan)
{
struct edma_chan *echan = to_edma_chan(chan);
/* Pause/Resume only allowed with cyclic mode */
if (!echan->edesc->cyclic)
return -EINVAL;
......@@ -305,36 +313,6 @@ static int edma_dma_resume(struct edma_chan *echan)
return 0;
}
static int edma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg)
{
int ret = 0;
struct dma_slave_config *config;
struct edma_chan *echan = to_edma_chan(chan);
switch (cmd) {
case DMA_TERMINATE_ALL:
edma_terminate_all(echan);
break;
case DMA_SLAVE_CONFIG:
config = (struct dma_slave_config *)arg;
ret = edma_slave_config(echan, config);
break;
case DMA_PAUSE:
ret = edma_dma_pause(echan);
break;
case DMA_RESUME:
ret = edma_dma_resume(echan);
break;
default:
ret = -ENOSYS;
}
return ret;
}
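For consumers nothing changes at the call sites: the dmaengine_slave_config()/dmaengine_pause()/dmaengine_resume()/dmaengine_terminate_all() wrappers that used to funnel into device_control (as the removed edma_control() above did) now dispatch to the dedicated hooks. A minimal consumer-side sketch, assuming a channel already obtained from dma_request_slave_channel(); foo_setup_rx() and the FIFO address are hypothetical:

#include <linux/dmaengine.h>

static int foo_setup_rx(struct dma_chan *chan, dma_addr_t fifo_addr)
{
	struct dma_slave_config cfg = {
		.direction = DMA_DEV_TO_MEM,
		.src_addr = fifo_addr,
		.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
		.src_maxburst = 8,
	};

	/* ends up in the driver's .device_config instead of
	 * device_control(chan, DMA_SLAVE_CONFIG, (unsigned long)&cfg) */
	return dmaengine_slave_config(chan, &cfg);
}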
/*
* A PaRAM set configuration abstraction used by other modes
* @chan: Channel who's PaRAM set we're configuring
......@@ -557,7 +535,7 @@ static struct dma_async_tx_descriptor *edma_prep_slave_sg(
return vchan_tx_prep(&echan->vchan, &edesc->vdesc, tx_flags);
}
static struct dma_async_tx_descriptor *edma_prep_dma_memcpy(
struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
size_t len, unsigned long tx_flags)
{
......@@ -994,19 +972,6 @@ static void __init edma_chan_init(struct edma_cc *ecc,
BIT(DMA_SLAVE_BUSWIDTH_3_BYTES) | \
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES))
static int edma_dma_device_slave_caps(struct dma_chan *dchan,
struct dma_slave_caps *caps)
{
caps->src_addr_widths = EDMA_DMA_BUSWIDTHS;
caps->dstn_addr_widths = EDMA_DMA_BUSWIDTHS;
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = true;
caps->cmd_terminate = true;
caps->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
return 0;
}
static void edma_dma_init(struct edma_cc *ecc, struct dma_device *dma,
struct device *dev)
{
......@@ -1017,8 +982,16 @@ static void edma_dma_init(struct edma_cc *ecc, struct dma_device *dma,
dma->device_free_chan_resources = edma_free_chan_resources;
dma->device_issue_pending = edma_issue_pending;
dma->device_tx_status = edma_tx_status;
	dma->device_config = edma_slave_config;
	dma->device_pause = edma_dma_pause;
	dma->device_resume = edma_dma_resume;
	dma->device_terminate_all = edma_terminate_all;
dma->src_addr_widths = EDMA_DMA_BUSWIDTHS;
dma->dst_addr_widths = EDMA_DMA_BUSWIDTHS;
dma->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
dma->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
dma->dev = dev;
/*
......
......@@ -144,7 +144,7 @@ struct ep93xx_dma_desc {
* @queue: pending descriptors which are handled next
* @free_list: list of free descriptors which can be used
* @runtime_addr: physical address currently used as dest/src (M2M only). This
 *                is set via .device_config before slave operation is
* prepared
* @runtime_ctrl: M2M runtime values for the control register.
*
......@@ -1164,13 +1164,14 @@ ep93xx_dma_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t dma_addr,
/**
* ep93xx_dma_terminate_all - terminate all transactions
 * @chan: channel
*
* Stops all DMA transactions. All descriptors are put back to the
* @edmac->free_list and callbacks are _not_ called.
*/
static int ep93xx_dma_terminate_all(struct dma_chan *chan)
{
struct ep93xx_dma_chan *edmac = to_ep93xx_dma_chan(chan);
struct ep93xx_dma_desc *desc, *_d;
unsigned long flags;
LIST_HEAD(list);
......@@ -1194,9 +1195,10 @@ static int ep93xx_dma_terminate_all(struct ep93xx_dma_chan *edmac)
return 0;
}
static int ep93xx_dma_slave_config(struct dma_chan *chan,
				   struct dma_slave_config *config)
{
struct ep93xx_dma_chan *edmac = to_ep93xx_dma_chan(chan);
enum dma_slave_buswidth width;
unsigned long flags;
u32 addr, ctrl;
......@@ -1241,36 +1243,6 @@ static int ep93xx_dma_slave_config(struct ep93xx_dma_chan *edmac,
return 0;
}
/**
* ep93xx_dma_control - manipulate all pending operations on a channel
* @chan: channel
* @cmd: control command to perform
* @arg: optional argument
*
* Controls the channel. Function returns %0 in case of success or negative
* error in case of failure.
*/
static int ep93xx_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg)
{
struct ep93xx_dma_chan *edmac = to_ep93xx_dma_chan(chan);
struct dma_slave_config *config;
switch (cmd) {
case DMA_TERMINATE_ALL:
return ep93xx_dma_terminate_all(edmac);
case DMA_SLAVE_CONFIG:
config = (struct dma_slave_config *)arg;
return ep93xx_dma_slave_config(edmac, config);
default:
break;
}
return -ENOSYS;
}
/**
* ep93xx_dma_tx_status - check if a transaction is completed
* @chan: channel
......@@ -1352,7 +1324,8 @@ static int __init ep93xx_dma_probe(struct platform_device *pdev)
dma_dev->device_free_chan_resources = ep93xx_dma_free_chan_resources;
dma_dev->device_prep_slave_sg = ep93xx_dma_prep_slave_sg;
dma_dev->device_prep_dma_cyclic = ep93xx_dma_prep_dma_cyclic;
	dma_dev->device_config = ep93xx_dma_slave_config;
	dma_dev->device_terminate_all = ep93xx_dma_terminate_all;
dma_dev->device_issue_pending = ep93xx_dma_issue_pending;
dma_dev->device_tx_status = ep93xx_dma_tx_status;
......
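Every converted callback now receives a bare struct dma_chan * and recovers the driver's channel with container_of(), exactly as to_edma_chan() and to_ep93xx_dma_chan() do above. The generic shape of that accessor, sketched with a hypothetical foo_chan:

#include <linux/dmaengine.h>

struct foo_chan {
	struct dma_chan chan;	/* must embed the generic channel */
	/* driver-private state follows */
};

static inline struct foo_chan *to_foo_chan(struct dma_chan *chan)
{
	return container_of(chan, struct foo_chan, chan);
}

static int foo_terminate_all(struct dma_chan *chan)
{
	struct foo_chan *fchan = to_foo_chan(chan);

	/* operate on fchan's private state here */
	return 0;
}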
......@@ -289,62 +289,69 @@ static void fsl_edma_free_desc(struct virt_dma_desc *vdesc)
kfree(fsl_desc);
}
static int fsl_edma_terminate_all(struct dma_chan *chan)
{
	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
	unsigned long flags;
	LIST_HEAD(head);

	spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
	fsl_edma_disable_request(fsl_chan);
	fsl_chan->edesc = NULL;
	vchan_get_all_descriptors(&fsl_chan->vchan, &head);
	spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
	vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
	return 0;
}

static int fsl_edma_pause(struct dma_chan *chan)
{
	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
	unsigned long flags;

	spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
	if (fsl_chan->edesc) {
		fsl_edma_disable_request(fsl_chan);
		fsl_chan->status = DMA_PAUSED;
	}
	spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
	return 0;
}

static int fsl_edma_resume(struct dma_chan *chan)
{
	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
	unsigned long flags;

	spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
	if (fsl_chan->edesc) {
		fsl_edma_enable_request(fsl_chan);
		fsl_chan->status = DMA_IN_PROGRESS;
	}
	spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
	return 0;
}

static int fsl_edma_slave_config(struct dma_chan *chan,
				 struct dma_slave_config *cfg)
{
	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);

	fsl_chan->fsc.dir = cfg->direction;
	if (cfg->direction == DMA_DEV_TO_MEM) {
		fsl_chan->fsc.dev_addr = cfg->src_addr;
		fsl_chan->fsc.addr_width = cfg->src_addr_width;
		fsl_chan->fsc.burst = cfg->src_maxburst;
		fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg->src_addr_width);
	} else if (cfg->direction == DMA_MEM_TO_DEV) {
		fsl_chan->fsc.dev_addr = cfg->dst_addr;
		fsl_chan->fsc.addr_width = cfg->dst_addr_width;
		fsl_chan->fsc.burst = cfg->dst_maxburst;
		fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg->dst_addr_width);
	} else {
		return -EINVAL;
	}
	return 0;
}
static size_t fsl_edma_desc_residue(struct fsl_edma_chan *fsl_chan,
......@@ -780,18 +787,6 @@ static void fsl_edma_free_chan_resources(struct dma_chan *chan)
fsl_chan->tcd_pool = NULL;
}
static int fsl_dma_device_slave_caps(struct dma_chan *dchan,
struct dma_slave_caps *caps)
{
caps->src_addr_widths = FSL_EDMA_BUSWIDTHS;
caps->dstn_addr_widths = FSL_EDMA_BUSWIDTHS;
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = true;
caps->cmd_terminate = true;
return 0;
}
static int
fsl_edma_irq_init(struct platform_device *pdev, struct fsl_edma_engine *fsl_edma)
{
......@@ -917,9 +912,15 @@ static int fsl_edma_probe(struct platform_device *pdev)
fsl_edma->dma_dev.device_tx_status = fsl_edma_tx_status;
fsl_edma->dma_dev.device_prep_slave_sg = fsl_edma_prep_slave_sg;
fsl_edma->dma_dev.device_prep_dma_cyclic = fsl_edma_prep_dma_cyclic;
	fsl_edma->dma_dev.device_config = fsl_edma_slave_config;
	fsl_edma->dma_dev.device_pause = fsl_edma_pause;
	fsl_edma->dma_dev.device_resume = fsl_edma_resume;
	fsl_edma->dma_dev.device_terminate_all = fsl_edma_terminate_all;
fsl_edma->dma_dev.device_issue_pending = fsl_edma_issue_pending;
	fsl_edma->dma_dev.src_addr_widths = FSL_EDMA_BUSWIDTHS;
	fsl_edma->dma_dev.dst_addr_widths = FSL_EDMA_BUSWIDTHS;
	fsl_edma->dma_dev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
platform_set_drvdata(pdev, fsl_edma);
......
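With the capability fields living in struct dma_device, the core's dma_get_slave_caps() can fill struct dma_slave_caps generically, deriving cmd_pause and cmd_terminate from whether device_pause and device_terminate_all are set. A sketch of the consumer-side check (foo_check_chan() is hypothetical; dma_get_slave_caps() and the caps fields are the real API):

#include <linux/dmaengine.h>

static int foo_check_chan(struct dma_chan *chan)
{
	struct dma_slave_caps caps;
	int ret;

	ret = dma_get_slave_caps(chan, &caps);
	if (ret)
		return ret;

	if (!(caps.src_addr_widths & BIT(DMA_SLAVE_BUSWIDTH_4_BYTES)))
		return -EINVAL;	/* controller can't do 32-bit device reads */
	if (!caps.cmd_pause)	/* true iff the driver set device_pause */
		return -EINVAL;

	return 0;
}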
......@@ -941,84 +941,56 @@ static struct dma_async_tx_descriptor *fsl_dma_prep_sg(struct dma_chan *dchan,
return NULL;
}
/**
* fsl_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
* @chan: DMA channel
* @sgl: scatterlist to transfer to/from
* @sg_len: number of entries in @scatterlist
* @direction: DMA direction
* @flags: DMAEngine flags
* @context: transaction context (ignored)
*
* Prepare a set of descriptors for a DMA_SLAVE transaction. Following the
* DMA_SLAVE API, this gets the device-specific information from the
* chan->private variable.
*/
static struct dma_async_tx_descriptor *fsl_dma_prep_slave_sg(
	struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
	enum dma_transfer_direction direction, unsigned long flags,
	void *context)
{
	/*
	 * This operation is not supported on the Freescale DMA controller
	 *
	 * However, we need to provide the function pointer to allow the
	 * device_control() method to work.
	 */
	return NULL;
}

static int fsl_dma_device_terminate_all(struct dma_chan *dchan)
{
	struct fsldma_chan *chan;

	if (!dchan)
		return -EINVAL;

	chan = to_fsl_chan(dchan);

	spin_lock_bh(&chan->desc_lock);

	/* Halt the DMA engine */
	dma_halt(chan);

	/* Remove and free all of the descriptors in the LD queue */
	fsldma_free_desc_list(chan, &chan->ld_pending);
	fsldma_free_desc_list(chan, &chan->ld_running);
	fsldma_free_desc_list(chan, &chan->ld_completed);
	chan->idle = true;

	spin_unlock_bh(&chan->desc_lock);
	return 0;
}

static int fsl_dma_device_config(struct dma_chan *dchan,
				 struct dma_slave_config *config)
{
	struct fsldma_chan *chan;
	int size;

	if (!dchan)
		return -EINVAL;

	chan = to_fsl_chan(dchan);

	/* make sure the channel supports setting burst size */
	if (!chan->set_request_count)
		return -ENXIO;

	/* we set the controller burst size depending on direction */
	if (config->direction == DMA_MEM_TO_DEV)
		size = config->dst_addr_width * config->dst_maxburst;
	else
		size = config->src_addr_width * config->src_maxburst;

	chan->set_request_count(chan, size);
	return 0;
}
/**
* fsl_dma_memcpy_issue_pending - Issue the DMA start command
* @chan : Freescale DMA channel
......@@ -1395,10 +1367,15 @@ static int fsldma_of_probe(struct platform_device *op)
fdev->common.device_prep_dma_sg = fsl_dma_prep_sg;
fdev->common.device_tx_status = fsl_tx_status;
fdev->common.device_issue_pending = fsl_dma_memcpy_issue_pending;
fdev->common.device_prep_slave_sg = fsl_dma_prep_slave_sg;
fdev->common.device_config = fsl_dma_device_config;
fdev->common.device_terminate_all = fsl_dma_device_terminate_all;
fdev->common.dev = &op->dev;
fdev->common.src_addr_widths = FSL_DMA_BUSWIDTHS;
fdev->common.dst_addr_widths = FSL_DMA_BUSWIDTHS;
fdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
fdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
dma_set_mask(&(op->dev), DMA_BIT_MASK(36));
platform_set_drvdata(op, fdev);
......
......@@ -83,6 +83,10 @@
#define FSL_DMA_DGSR_EOSI 0x02
#define FSL_DMA_DGSR_EOLSI 0x01
#define FSL_DMA_BUSWIDTHS (BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \
BIT(DMA_SLAVE_BUSWIDTH_8_BYTES))
typedef u64 __bitwise v64;
typedef u32 __bitwise v32;
......
(This diff has been collapsed.)
......@@ -230,11 +230,6 @@ static inline int is_imx1_dma(struct imxdma_engine *imxdma)
return imxdma->devtype == IMX1_DMA;
}
static inline int is_imx21_dma(struct imxdma_engine *imxdma)
{
return imxdma->devtype == IMX21_DMA;
}
static inline int is_imx27_dma(struct imxdma_engine *imxdma)
{
return imxdma->devtype == IMX27_DMA;
......@@ -669,69 +664,67 @@ static void imxdma_tasklet(unsigned long data)
}
static int imxdma_terminate_all(struct dma_chan *chan)
{
	struct imxdma_channel *imxdmac = to_imxdma_chan(chan);
	struct imxdma_engine *imxdma = imxdmac->imxdma;
	unsigned long flags;

	imxdma_disable_hw(imxdmac);

	spin_lock_irqsave(&imxdma->lock, flags);
	list_splice_tail_init(&imxdmac->ld_active, &imxdmac->ld_free);
	list_splice_tail_init(&imxdmac->ld_queue, &imxdmac->ld_free);
	spin_unlock_irqrestore(&imxdma->lock, flags);
	return 0;
}

static int imxdma_config(struct dma_chan *chan,
			 struct dma_slave_config *dmaengine_cfg)
{
	struct imxdma_channel *imxdmac = to_imxdma_chan(chan);
	struct imxdma_engine *imxdma = imxdmac->imxdma;
	unsigned int mode = 0;

	if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
		imxdmac->per_address = dmaengine_cfg->src_addr;
		imxdmac->watermark_level = dmaengine_cfg->src_maxburst;
		imxdmac->word_size = dmaengine_cfg->src_addr_width;
	} else {
		imxdmac->per_address = dmaengine_cfg->dst_addr;
		imxdmac->watermark_level = dmaengine_cfg->dst_maxburst;
		imxdmac->word_size = dmaengine_cfg->dst_addr_width;
	}

	switch (imxdmac->word_size) {
	case DMA_SLAVE_BUSWIDTH_1_BYTE:
		mode = IMX_DMA_MEMSIZE_8;
		break;
	case DMA_SLAVE_BUSWIDTH_2_BYTES:
		mode = IMX_DMA_MEMSIZE_16;
		break;
	default:
	case DMA_SLAVE_BUSWIDTH_4_BYTES:
		mode = IMX_DMA_MEMSIZE_32;
		break;
	}

	imxdmac->hw_chaining = 0;

	imxdmac->ccr_from_device = (mode | IMX_DMA_TYPE_FIFO) |
		((IMX_DMA_MEMSIZE_32 | IMX_DMA_TYPE_LINEAR) << 2) |
		CCR_REN;
	imxdmac->ccr_to_device =
		(IMX_DMA_MEMSIZE_32 | IMX_DMA_TYPE_LINEAR) |
		((mode | IMX_DMA_TYPE_FIFO) << 2) | CCR_REN;

	imx_dmav1_writel(imxdma, imxdmac->dma_request,
			 DMA_RSSR(imxdmac->channel));

	/* Set burst length */
	imx_dmav1_writel(imxdma, imxdmac->watermark_level *
			 imxdmac->word_size, DMA_BLR(imxdmac->channel));

	return 0;
}
static enum dma_status imxdma_tx_status(struct dma_chan *chan,
......@@ -1184,7 +1177,8 @@ static int __init imxdma_probe(struct platform_device *pdev)
imxdma->dma_device.device_prep_dma_cyclic = imxdma_prep_dma_cyclic;
imxdma->dma_device.device_prep_dma_memcpy = imxdma_prep_dma_memcpy;
imxdma->dma_device.device_prep_interleaved_dma = imxdma_prep_dma_interleaved;
	imxdma->dma_device.device_config = imxdma_config;
	imxdma->dma_device.device_terminate_all = imxdma_terminate_all;
imxdma->dma_device.device_issue_pending = imxdma_issue_pending;
platform_set_drvdata(pdev, imxdma);
......
......@@ -830,20 +830,29 @@ static int sdma_load_context(struct sdma_channel *sdmac)
return ret;
}
static struct sdma_channel *to_sdma_chan(struct dma_chan *chan)
{
	return container_of(chan, struct sdma_channel, chan);
}

static int sdma_disable_channel(struct dma_chan *chan)
{
	struct sdma_channel *sdmac = to_sdma_chan(chan);
	struct sdma_engine *sdma = sdmac->sdma;
	int channel = sdmac->channel;

	writel_relaxed(BIT(channel), sdma->regs + SDMA_H_STATSTOP);
	sdmac->status = DMA_ERROR;

	return 0;
}

static int sdma_config_channel(struct dma_chan *chan)
{
	struct sdma_channel *sdmac = to_sdma_chan(chan);
	int ret;

	sdma_disable_channel(chan);
sdmac->event_mask[0] = 0;
sdmac->event_mask[1] = 0;
......@@ -935,11 +944,6 @@ static int sdma_request_channel(struct sdma_channel *sdmac)
return ret;
}
static dma_cookie_t sdma_tx_submit(struct dma_async_tx_descriptor *tx)
{
unsigned long flags;
......@@ -1004,7 +1008,7 @@ static void sdma_free_chan_resources(struct dma_chan *chan)
struct sdma_channel *sdmac = to_sdma_chan(chan);
struct sdma_engine *sdma = sdmac->sdma;
	sdma_disable_channel(chan);
if (sdmac->event_id0)
sdma_event_disable(sdmac, sdmac->event_id0);
......@@ -1203,35 +1207,24 @@ static struct dma_async_tx_descriptor *sdma_prep_dma_cyclic(
return NULL;
}
static int sdma_config(struct dma_chan *chan,
		       struct dma_slave_config *dmaengine_cfg)
{
	struct sdma_channel *sdmac = to_sdma_chan(chan);

	if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
		sdmac->per_address = dmaengine_cfg->src_addr;
		sdmac->watermark_level = dmaengine_cfg->src_maxburst *
			dmaengine_cfg->src_addr_width;
		sdmac->word_size = dmaengine_cfg->src_addr_width;
	} else {
		sdmac->per_address = dmaengine_cfg->dst_addr;
		sdmac->watermark_level = dmaengine_cfg->dst_maxburst *
			dmaengine_cfg->dst_addr_width;
		sdmac->word_size = dmaengine_cfg->dst_addr_width;
	}
	sdmac->direction = dmaengine_cfg->direction;
	return sdma_config_channel(chan);
}
static enum dma_status sdma_tx_status(struct dma_chan *chan,
......@@ -1303,15 +1296,15 @@ static void sdma_load_firmware(const struct firmware *fw, void *context)
if (header->ram_code_start + header->ram_code_size > fw->size)
goto err_firmware;
	switch (header->version_major) {
	case 1:
		sdma->script_number = SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V1;
		break;
	case 2:
		sdma->script_number = SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V2;
		break;
	default:
		dev_err(sdma->dev, "unknown firmware version\n");
		goto err_firmware;
	}
addr = (void *)header + header->script_addrs_start;
......@@ -1479,7 +1472,7 @@ static int sdma_probe(struct platform_device *pdev)
if (ret)
return ret;
	sdma = devm_kzalloc(&pdev->dev, sizeof(*sdma), GFP_KERNEL);
if (!sdma)
return -ENOMEM;
......@@ -1488,48 +1481,34 @@ static int sdma_probe(struct platform_device *pdev)
sdma->dev = &pdev->dev;
sdma->drvdata = drvdata;
	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	sdma->regs = devm_ioremap_resource(&pdev->dev, iores);
	if (IS_ERR(sdma->regs))
		return PTR_ERR(sdma->regs);

	sdma->clk_ipg = devm_clk_get(&pdev->dev, "ipg");
	if (IS_ERR(sdma->clk_ipg))
		return PTR_ERR(sdma->clk_ipg);

	sdma->clk_ahb = devm_clk_get(&pdev->dev, "ahb");
	if (IS_ERR(sdma->clk_ahb))
		return PTR_ERR(sdma->clk_ahb);

	clk_prepare(sdma->clk_ipg);
	clk_prepare(sdma->clk_ahb);

	ret = devm_request_irq(&pdev->dev, irq, sdma_int_handler, 0, "sdma",
			       sdma);
	if (ret)
		return ret;

	sdma->script_addrs = kzalloc(sizeof(*sdma->script_addrs), GFP_KERNEL);
	if (!sdma->script_addrs)
		return -ENOMEM;
/* initially no scripts available */
saddr_arr = (s32 *)sdma->script_addrs;
......@@ -1600,7 +1579,12 @@ static int sdma_probe(struct platform_device *pdev)
sdma->dma_device.device_tx_status = sdma_tx_status;
sdma->dma_device.device_prep_slave_sg = sdma_prep_slave_sg;
sdma->dma_device.device_prep_dma_cyclic = sdma_prep_dma_cyclic;
	sdma->dma_device.device_config = sdma_config;
	sdma->dma_device.device_terminate_all = sdma_disable_channel;
sdma->dma_device.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
sdma->dma_device.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
sdma->dma_device.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
sdma->dma_device.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
sdma->dma_device.device_issue_pending = sdma_issue_pending;
sdma->dma_device.dev->dma_parms = &sdma->dma_parms;
dma_set_max_seg_size(sdma->dma_device.dev, 65535);
......@@ -1629,38 +1613,22 @@ static int sdma_probe(struct platform_device *pdev)
dma_async_device_unregister(&sdma->dma_device);
err_init:
kfree(sdma->script_addrs);
return ret;
}
static int sdma_remove(struct platform_device *pdev)
{
	struct sdma_engine *sdma = platform_get_drvdata(pdev);
	int i;

	dma_async_device_unregister(&sdma->dma_device);
	kfree(sdma->script_addrs);
	/* Kill the tasklet */
	for (i = 0; i < MAX_DMA_CHANNELS; i++) {
		struct sdma_channel *sdmac = &sdma->channel[i];

		tasklet_kill(&sdmac->tasklet);
	}

	platform_set_drvdata(pdev, NULL);
dev_info(&pdev->dev, "Removed...\n");
......
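The sdma probe rework above is a straight conversion to managed (devm_*) resources, which is what lets the err_* unwind ladder and most of sdma_remove() disappear: anything acquired through devm is released automatically on probe failure or driver detach. A condensed sketch of the shape, with hypothetical foo_* names:

#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static irqreturn_t foo_isr(int irq, void *data);

static int foo_probe(struct platform_device *pdev)
{
	struct resource *iores;
	void __iomem *regs;
	int irq, ret;

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	regs = devm_ioremap_resource(&pdev->dev, iores);	/* checks iores too */
	if (IS_ERR(regs))
		return PTR_ERR(regs);

	ret = devm_request_irq(&pdev->dev, irq, foo_isr, 0, "foo", NULL);
	if (ret)
		return ret;

	return 0;	/* nothing to unwind: devm releases everything */
}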
......@@ -492,10 +492,10 @@ static enum dma_status intel_mid_dma_tx_status(struct dma_chan *chan,
return ret;
}
static int intel_mid_dma_config(struct dma_chan *chan,
				struct dma_slave_config *slave)
{
struct intel_mid_dma_chan *midc = to_intel_mid_dma_chan(chan);
struct intel_mid_dma_slave *mid_slave;
BUG_ON(!midc);
......@@ -509,28 +509,14 @@ static int dma_slave_control(struct dma_chan *chan, unsigned long arg)
midc->mid_slave = mid_slave;
return 0;
}
static int intel_mid_dma_terminate_all(struct dma_chan *chan)
{
struct intel_mid_dma_chan *midc = to_intel_mid_dma_chan(chan);
struct middma_device *mid = to_middma_device(chan->device);
struct intel_mid_dma_desc *desc, *_desc;
union intel_mid_dma_cfg_lo cfg_lo;
spin_lock_bh(&midc->lock);
if (midc->busy == false) {
spin_unlock_bh(&midc->lock);
......@@ -1148,7 +1134,8 @@ static int mid_setup_dma(struct pci_dev *pdev)
dma->common.device_prep_dma_memcpy = intel_mid_dma_prep_memcpy;
dma->common.device_issue_pending = intel_mid_dma_issue_pending;
dma->common.device_prep_slave_sg = intel_mid_dma_prep_slave_sg;
	dma->common.device_config = intel_mid_dma_config;
	dma->common.device_terminate_all = intel_mid_dma_terminate_all;
/*enable dma cntrl*/
iowrite32(REG_BIT0, dma->dma_base + DMA_CFG);
......
......@@ -214,6 +214,11 @@ static bool is_bwd_ioat(struct pci_dev *pdev)
case PCI_DEVICE_ID_INTEL_IOAT_BWD1:
case PCI_DEVICE_ID_INTEL_IOAT_BWD2:
case PCI_DEVICE_ID_INTEL_IOAT_BWD3:
/* even though not Atom, BDX-DE has same DMA silicon */
case PCI_DEVICE_ID_INTEL_IOAT_BDXDE0:
case PCI_DEVICE_ID_INTEL_IOAT_BDXDE1:
case PCI_DEVICE_ID_INTEL_IOAT_BDXDE2:
case PCI_DEVICE_ID_INTEL_IOAT_BDXDE3:
return true;
default:
return false;
......@@ -489,6 +494,7 @@ static void ioat3_eh(struct ioat2_dma_chan *ioat)
struct ioat_chan_common *chan = &ioat->base;
struct pci_dev *pdev = to_pdev(chan);
struct ioat_dma_descriptor *hw;
struct dma_async_tx_descriptor *tx;
u64 phys_complete;
struct ioat_ring_ent *desc;
u32 err_handled = 0;
......@@ -534,6 +540,16 @@ static void ioat3_eh(struct ioat2_dma_chan *ioat)
dev_err(to_dev(chan), "%s: fatal error (%x:%x)\n",
__func__, chanerr, err_handled);
BUG();
} else { /* cleanup the faulty descriptor */
tx = &desc->txd;
if (tx->cookie) {
dma_cookie_complete(tx);
dma_descriptor_unmap(tx);
if (tx->callback) {
tx->callback(tx->callback_param);
tx->callback = NULL;
}
}
}
writel(chanerr, chan->reg_base + IOAT_CHANERR_OFFSET);
......@@ -1300,7 +1316,8 @@ static int ioat_xor_val_self_test(struct ioatdma_device *device)
tmo = wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000));
	if (tmo == 0 ||
	    dma->device_tx_status(dma_chan, cookie, NULL) != DMA_COMPLETE) {
dev_err(dev, "Self-test xor timed out\n");
err = -ENODEV;
goto dma_unmap;
......@@ -1366,7 +1383,8 @@ static int ioat_xor_val_self_test(struct ioatdma_device *device)
tmo = wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000));
	if (tmo == 0 ||
	    dma->device_tx_status(dma_chan, cookie, NULL) != DMA_COMPLETE) {
dev_err(dev, "Self-test validate timed out\n");
err = -ENODEV;
goto dma_unmap;
......@@ -1418,7 +1436,8 @@ static int ioat_xor_val_self_test(struct ioatdma_device *device)
tmo = wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000));
	if (tmo == 0 ||
	    dma->device_tx_status(dma_chan, cookie, NULL) != DMA_COMPLETE) {
dev_err(dev, "Self-test 2nd validate timed out\n");
err = -ENODEV;
goto dma_unmap;
......
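The three self-test hunks above fix the same bug: wait_for_completion_timeout() returns 0 on timeout, and previously a timed-out wait could still pass the test if the descriptor status happened to read DMA_COMPLETE by the time it was polled. The pattern extracted as a fragment (cmp, dma_chan, cookie, dev, err and the 3000 ms budget mirror the self-test locals above):

	unsigned long tmo;

	tmo = wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000));

	/* tmo == 0: the completion never fired within the budget -- fail
	 * even if polling the status now reports DMA_COMPLETE */
	if (tmo == 0 ||
	    dma->device_tx_status(dma_chan, cookie, NULL) != DMA_COMPLETE) {
		dev_err(dev, "Self-test timed out\n");
		err = -ENODEV;
		goto dma_unmap;
	}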
......@@ -57,6 +57,11 @@
#define PCI_DEVICE_ID_INTEL_IOAT_BWD2 0x0C52
#define PCI_DEVICE_ID_INTEL_IOAT_BWD3 0x0C53
#define PCI_DEVICE_ID_INTEL_IOAT_BDXDE0 0x6f50
#define PCI_DEVICE_ID_INTEL_IOAT_BDXDE1 0x6f51
#define PCI_DEVICE_ID_INTEL_IOAT_BDXDE2 0x6f52
#define PCI_DEVICE_ID_INTEL_IOAT_BDXDE3 0x6f53
#define IOAT_VER_1_2 0x12 /* Version 1.2 */
#define IOAT_VER_2_0 0x20 /* Version 2.0 */
#define IOAT_VER_3_0 0x30 /* Version 3.0 */
......
......@@ -111,6 +111,11 @@ static struct pci_device_id ioat_pci_tbl[] = {
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BWD2) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BWD3) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDXDE0) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDXDE1) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDXDE2) },
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT_BDXDE3) },
{ 0, }
};
MODULE_DEVICE_TABLE(pci, ioat_pci_tbl);
......
......@@ -1398,76 +1398,81 @@ static void idmac_issue_pending(struct dma_chan *chan)
*/
}
static int idmac_pause(struct dma_chan *chan)
{
	struct idmac_channel *ichan = to_idmac_chan(chan);
	struct idmac *idmac = to_idmac(chan->device);
	struct ipu *ipu = to_ipu(idmac);
	struct list_head *list, *tmp;
	unsigned long flags;

	mutex_lock(&ichan->chan_mutex);

	spin_lock_irqsave(&ipu->lock, flags);
	ipu_ic_disable_task(ipu, chan->chan_id);

	/* Return all descriptors into "prepared" state */
	list_for_each_safe(list, tmp, &ichan->queue)
		list_del_init(list);

	ichan->sg[0] = NULL;
	ichan->sg[1] = NULL;

	spin_unlock_irqrestore(&ipu->lock, flags);

	ichan->status = IPU_CHANNEL_INITIALIZED;

	mutex_unlock(&ichan->chan_mutex);

	return 0;
}

static int __idmac_terminate_all(struct dma_chan *chan)
{
	struct idmac_channel *ichan = to_idmac_chan(chan);
	struct idmac *idmac = to_idmac(chan->device);
	struct ipu *ipu = to_ipu(idmac);
	unsigned long flags;
	int i;

	ipu_disable_channel(idmac, ichan,
			    ichan->status >= IPU_CHANNEL_ENABLED);

	tasklet_disable(&ipu->tasklet);

	/* ichan->queue is modified in ISR, have to spinlock */
	spin_lock_irqsave(&ichan->lock, flags);
	list_splice_init(&ichan->queue, &ichan->free_list);

	if (ichan->desc)
		for (i = 0; i < ichan->n_tx_desc; i++) {
			struct idmac_tx_desc *desc = ichan->desc + i;
			if (list_empty(&desc->list))
				/* Descriptor was prepared, but not submitted */
				list_add(&desc->list, &ichan->free_list);

			async_tx_clear_ack(&desc->txd);
		}

	ichan->sg[0] = NULL;
	ichan->sg[1] = NULL;
	spin_unlock_irqrestore(&ichan->lock, flags);

	tasklet_enable(&ipu->tasklet);

	ichan->status = IPU_CHANNEL_INITIALIZED;

	return 0;
}
}
static int idmac_terminate_all(struct dma_chan *chan)
{
struct idmac_channel *ichan = to_idmac_chan(chan);
int ret;
mutex_lock(&ichan->chan_mutex);
	ret = __idmac_terminate_all(chan);
mutex_unlock(&ichan->chan_mutex);
......@@ -1568,7 +1573,7 @@ static void idmac_free_chan_resources(struct dma_chan *chan)
mutex_lock(&ichan->chan_mutex);
	__idmac_terminate_all(chan);
if (ichan->status > IPU_CHANNEL_FREE) {
#ifdef DEBUG
......@@ -1622,7 +1627,8 @@ static int __init ipu_idmac_init(struct ipu *ipu)
/* Compulsory for DMA_SLAVE fields */
dma->device_prep_slave_sg = idmac_prep_slave_sg;
	dma->device_pause = idmac_pause;
	dma->device_terminate_all = idmac_terminate_all;
INIT_LIST_HEAD(&dma->channels);
for (i = 0; i < IPU_CHANNELS_NUM; i++) {
......@@ -1655,7 +1661,7 @@ static void ipu_idmac_exit(struct ipu *ipu)
for (i = 0; i < IPU_CHANNELS_NUM; i++) {
struct idmac_channel *ichan = ipu->channel + i;
		idmac_terminate_all(&ichan->dma_chan);
}
dma_async_device_unregister(&idmac->dma);
......
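idmac keeps its two-level locking across the conversion: the exported callback takes the channel mutex, while a __-prefixed worker assumes the mutex is already held, which is why idmac_free_chan_resources() can call __idmac_terminate_all() directly. The wrapper shape, sketched with hypothetical foo_* names:

#include <linux/dmaengine.h>
#include <linux/mutex.h>

struct foo_chan {
	struct dma_chan dma_chan;
	struct mutex chan_mutex;	/* serializes terminate vs. free */
};

static int __foo_terminate_all(struct dma_chan *chan);	/* caller holds mutex */

static int foo_terminate_all(struct dma_chan *chan)
{
	struct foo_chan *fchan = container_of(chan, struct foo_chan, dma_chan);
	int ret;

	mutex_lock(&fchan->chan_mutex);
	ret = __foo_terminate_all(chan);
	mutex_unlock(&fchan->chan_mutex);

	return ret;
}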
......@@ -441,7 +441,7 @@ static struct dma_async_tx_descriptor *k3_dma_prep_memcpy(
num = 0;
if (!c->ccfg) {
/* default is memtomem, without calling device_control */
/* default is memtomem, without calling device_config */
c->ccfg = CX_CFG_SRCINCR | CX_CFG_DSTINCR | CX_CFG_EN;
c->ccfg |= (0xf << 20) | (0xf << 24); /* burst = 16 */
c->ccfg |= (0x3 << 12) | (0x3 << 16); /* width = 64 bit */
......@@ -523,112 +523,126 @@ static struct dma_async_tx_descriptor *k3_dma_prep_slave_sg(
return vchan_tx_prep(&c->vc, &ds->vd, flags);
}
static int k3_dma_config(struct dma_chan *chan,
			 struct dma_slave_config *cfg)
{
	struct k3_dma_chan *c = to_k3_chan(chan);
	u32 maxburst = 0, val = 0;
	enum dma_slave_buswidth width = DMA_SLAVE_BUSWIDTH_UNDEFINED;

	if (cfg == NULL)
		return -EINVAL;
	c->dir = cfg->direction;
	if (c->dir == DMA_DEV_TO_MEM) {
		c->ccfg = CX_CFG_DSTINCR;
		c->dev_addr = cfg->src_addr;
		maxburst = cfg->src_maxburst;
		width = cfg->src_addr_width;
	} else if (c->dir == DMA_MEM_TO_DEV) {
		c->ccfg = CX_CFG_SRCINCR;
		c->dev_addr = cfg->dst_addr;
		maxburst = cfg->dst_maxburst;
		width = cfg->dst_addr_width;
	}
	switch (width) {
	case DMA_SLAVE_BUSWIDTH_1_BYTE:
	case DMA_SLAVE_BUSWIDTH_2_BYTES:
	case DMA_SLAVE_BUSWIDTH_4_BYTES:
	case DMA_SLAVE_BUSWIDTH_8_BYTES:
		val = __ffs(width);
		break;
	default:
		val = 3;
		break;
	}
	c->ccfg |= (val << 12) | (val << 16);

	if ((maxburst == 0) || (maxburst > 16))
		val = 16;
	else
		val = maxburst - 1;
	c->ccfg |= (val << 20) | (val << 24);
	c->ccfg |= CX_CFG_MEM2PER | CX_CFG_EN;

	/* specific request line */
	c->ccfg |= c->vc.chan.chan_id << 4;

	return 0;
}

static int k3_dma_terminate_all(struct dma_chan *chan)
{
	struct k3_dma_chan *c = to_k3_chan(chan);
	struct k3_dma_dev *d = to_k3_dma(chan->device);
	struct k3_dma_phy *p = c->phy;
	unsigned long flags;
	LIST_HEAD(head);

	dev_dbg(d->slave.dev, "vchan %p: terminate all\n", &c->vc);

	/* Prevent this channel being scheduled */
	spin_lock(&d->lock);
	list_del_init(&c->node);
	spin_unlock(&d->lock);

	/* Clear the tx descriptor lists */
	spin_lock_irqsave(&c->vc.lock, flags);
	vchan_get_all_descriptors(&c->vc, &head);
	if (p) {
		/* vchan is assigned to a pchan - stop the channel */
		k3_dma_terminate_chan(p, d);
		c->phy = NULL;
		p->vchan = NULL;
		p->ds_run = p->ds_done = NULL;
	}
	spin_unlock_irqrestore(&c->vc.lock, flags);
	vchan_dma_desc_free_list(&c->vc, &head);

	return 0;
}

static int k3_dma_transfer_pause(struct dma_chan *chan)
{
	struct k3_dma_chan *c = to_k3_chan(chan);
	struct k3_dma_dev *d = to_k3_dma(chan->device);
	struct k3_dma_phy *p = c->phy;

	dev_dbg(d->slave.dev, "vchan %p: pause\n", &c->vc);
	if (c->status == DMA_IN_PROGRESS) {
		c->status = DMA_PAUSED;
		if (p) {
			k3_dma_pause_dma(p, false);
		} else {
			spin_lock(&d->lock);
			list_del_init(&c->node);
			spin_unlock(&d->lock);
		}
	}

	return 0;
}

static int k3_dma_transfer_resume(struct dma_chan *chan)
{
	struct k3_dma_chan *c = to_k3_chan(chan);
	struct k3_dma_dev *d = to_k3_dma(chan->device);
	struct k3_dma_phy *p = c->phy;
	unsigned long flags;

	dev_dbg(d->slave.dev, "vchan %p: resume\n", &c->vc);
	spin_lock_irqsave(&c->vc.lock, flags);
	if (c->status == DMA_PAUSED) {
		c->status = DMA_IN_PROGRESS;
		if (p) {
			k3_dma_pause_dma(p, true);
		} else if (!list_empty(&c->vc.desc_issued)) {
			spin_lock(&d->lock);
			list_add_tail(&c->node, &d->chan_pending);
			spin_unlock(&d->lock);
		}
	}
	spin_unlock_irqrestore(&c->vc.lock, flags);

	return 0;
}
......@@ -720,7 +734,10 @@ static int k3_dma_probe(struct platform_device *op)
d->slave.device_prep_dma_memcpy = k3_dma_prep_memcpy;
d->slave.device_prep_slave_sg = k3_dma_prep_slave_sg;
d->slave.device_issue_pending = k3_dma_issue_pending;
	d->slave.device_config = k3_dma_config;
	d->slave.device_pause = k3_dma_transfer_pause;
	d->slave.device_resume = k3_dma_transfer_resume;
	d->slave.device_terminate_all = k3_dma_terminate_all;
d->slave.copy_align = DMA_ALIGN;
/* init virtual channel */
......@@ -787,7 +804,7 @@ static int k3_dma_remove(struct platform_device *op)
}
#ifdef CONFIG_PM_SLEEP
static int k3_dma_suspend_dev(struct device *dev)
{
struct k3_dma_dev *d = dev_get_drvdata(dev);
u32 stat = 0;
......@@ -803,7 +820,7 @@ static int k3_dma_suspend(struct device *dev)
return 0;
}
static int k3_dma_resume_dev(struct device *dev)
{
struct k3_dma_dev *d = dev_get_drvdata(dev);
int ret = 0;
......@@ -818,7 +835,7 @@ static int k3_dma_resume(struct device *dev)
}
#endif
static SIMPLE_DEV_PM_OPS(k3_dma_pmops, k3_dma_suspend_dev, k3_dma_resume_dev);
static struct platform_driver k3_pdma_driver = {
.driver = {
......
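k3-dma, fsl-edma and the other virt-dma based drivers in this series share one termination recipe: collect every descriptor under the vchan lock, then free the list outside it so the descriptor free callbacks never run under the spinlock. A generic sketch (foo_* names are hypothetical; the vchan_* helpers are the real ones from drivers/dma/virt-dma.h):

#include "virt-dma.h"	/* drivers/dma/virt-dma.h */

struct foo_chan {
	struct virt_dma_chan vc;
	/* driver-private state follows */
};

static void foo_hw_stop(struct foo_chan *c);	/* hypothetical hw stop */

static int foo_terminate_all(struct dma_chan *chan)
{
	struct foo_chan *c = container_of(chan, struct foo_chan, vc.chan);
	unsigned long flags;
	LIST_HEAD(head);

	spin_lock_irqsave(&c->vc.lock, flags);
	foo_hw_stop(c);				/* stop the physical channel */
	vchan_get_all_descriptors(&c->vc, &head);
	spin_unlock_irqrestore(&c->vc.lock, flags);

	/* descriptors are freed outside the spinlock */
	vchan_dma_desc_free_list(&c->vc, &head);

	return 0;
}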
......@@ -683,68 +683,70 @@ mmp_pdma_prep_dma_cyclic(struct dma_chan *dchan,
return NULL;
}
static int mmp_pdma_config(struct dma_chan *dchan,
			   struct dma_slave_config *cfg)
{
	struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
	u32 maxburst = 0, addr = 0;
	enum dma_slave_buswidth width = DMA_SLAVE_BUSWIDTH_UNDEFINED;

	if (!dchan)
		return -EINVAL;

	if (cfg->direction == DMA_DEV_TO_MEM) {
		chan->dcmd = DCMD_INCTRGADDR | DCMD_FLOWSRC;
		maxburst = cfg->src_maxburst;
		width = cfg->src_addr_width;
		addr = cfg->src_addr;
	} else if (cfg->direction == DMA_MEM_TO_DEV) {
		chan->dcmd = DCMD_INCSRCADDR | DCMD_FLOWTRG;
		maxburst = cfg->dst_maxburst;
		width = cfg->dst_addr_width;
		addr = cfg->dst_addr;
	}

	if (width == DMA_SLAVE_BUSWIDTH_1_BYTE)
		chan->dcmd |= DCMD_WIDTH1;
	else if (width == DMA_SLAVE_BUSWIDTH_2_BYTES)
		chan->dcmd |= DCMD_WIDTH2;
	else if (width == DMA_SLAVE_BUSWIDTH_4_BYTES)
		chan->dcmd |= DCMD_WIDTH4;

	if (maxburst == 8)
		chan->dcmd |= DCMD_BURST8;
	else if (maxburst == 16)
		chan->dcmd |= DCMD_BURST16;
	else if (maxburst == 32)
		chan->dcmd |= DCMD_BURST32;

	chan->dir = cfg->direction;
	chan->dev_addr = addr;
	/* FIXME: drivers should be ported over to use the filter
	 * function. Once that's done, the following two lines can
	 * be removed.
	 */
	if (cfg->slave_id)
		chan->drcmr = cfg->slave_id;

	return 0;
}

static int mmp_pdma_terminate_all(struct dma_chan *dchan)
{
	struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
	unsigned long flags;

	if (!dchan)
		return -EINVAL;

	disable_chan(chan->phy);
	mmp_pdma_free_phy(chan);
	spin_lock_irqsave(&chan->desc_lock, flags);
	mmp_pdma_free_desc_list(chan, &chan->chain_pending);
	mmp_pdma_free_desc_list(chan, &chan->chain_running);
	spin_unlock_irqrestore(&chan->desc_lock, flags);
	chan->idle = true;

	return 0;
}
......@@ -1061,7 +1063,8 @@ static int mmp_pdma_probe(struct platform_device *op)
pdev->device.device_prep_slave_sg = mmp_pdma_prep_slave_sg;
pdev->device.device_prep_dma_cyclic = mmp_pdma_prep_dma_cyclic;
pdev->device.device_issue_pending = mmp_pdma_issue_pending;
	pdev->device.device_config = mmp_pdma_config;
	pdev->device.device_terminate_all = mmp_pdma_terminate_all;
pdev->device.copy_align = PDMA_ALIGNMENT;
if (pdev->dev->coherent_dma_mask)
......
......@@ -19,7 +19,6 @@
#include <linux/dmaengine.h>
#include <linux/platform_device.h>
#include <linux/device.h>
#include <mach/regs-icu.h>
#include <linux/platform_data/dma-mmp_tdma.h>
#include <linux/of_device.h>
#include <linux/of_dma.h>
......@@ -164,33 +163,46 @@ static void mmp_tdma_enable_chan(struct mmp_tdma_chan *tdmac)
tdmac->status = DMA_IN_PROGRESS;
}
static int mmp_tdma_disable_chan(struct dma_chan *chan)
{
struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
writel(readl(tdmac->reg_base + TDCR) & ~TDCR_CHANEN,
tdmac->reg_base + TDCR);
tdmac->status = DMA_COMPLETE;
return 0;
}
static int mmp_tdma_resume_chan(struct dma_chan *chan)
{
struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
writel(readl(tdmac->reg_base + TDCR) | TDCR_CHANEN,
tdmac->reg_base + TDCR);
tdmac->status = DMA_IN_PROGRESS;
return 0;
}
static int mmp_tdma_pause_chan(struct dma_chan *chan)
{
struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
writel(readl(tdmac->reg_base + TDCR) & ~TDCR_CHANEN,
tdmac->reg_base + TDCR);
tdmac->status = DMA_PAUSED;
return 0;
}
static int mmp_tdma_config_chan(struct dma_chan *chan)
{
struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
unsigned int tdcr = 0;
	mmp_tdma_disable_chan(chan);
if (tdmac->dir == DMA_MEM_TO_DEV)
tdcr = TDCR_DSTDIR_ADDR_HOLD | TDCR_SRCDIR_ADDR_INC;
......@@ -452,42 +464,34 @@ static struct dma_async_tx_descriptor *mmp_tdma_prep_dma_cyclic(
return NULL;
}
static int mmp_tdma_terminate_all(struct dma_chan *chan)
{
	struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);

	mmp_tdma_disable_chan(chan);
	/* disable interrupt */
	mmp_tdma_enable_irq(tdmac, false);

	return 0;
}

static int mmp_tdma_config(struct dma_chan *chan,
			   struct dma_slave_config *dmaengine_cfg)
{
	struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);

	if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
		tdmac->dev_addr = dmaengine_cfg->src_addr;
		tdmac->burst_sz = dmaengine_cfg->src_maxburst;
		tdmac->buswidth = dmaengine_cfg->src_addr_width;
	} else {
		tdmac->dev_addr = dmaengine_cfg->dst_addr;
		tdmac->burst_sz = dmaengine_cfg->dst_maxburst;
		tdmac->buswidth = dmaengine_cfg->dst_addr_width;
	}
	tdmac->dir = dmaengine_cfg->direction;

	return mmp_tdma_config_chan(chan);
}
}
static enum dma_status mmp_tdma_tx_status(struct dma_chan *chan,
......@@ -668,7 +672,10 @@ static int mmp_tdma_probe(struct platform_device *pdev)
tdev->device.device_prep_dma_cyclic = mmp_tdma_prep_dma_cyclic;
tdev->device.device_tx_status = mmp_tdma_tx_status;
tdev->device.device_issue_pending = mmp_tdma_issue_pending;
	tdev->device.device_config = mmp_tdma_config;
	tdev->device.device_pause = mmp_tdma_pause_chan;
	tdev->device.device_resume = mmp_tdma_resume_chan;
	tdev->device.device_terminate_all = mmp_tdma_terminate_all;
tdev->device.copy_align = TDMA_ALIGNMENT;
dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
......
......@@ -263,28 +263,6 @@ static int moxart_slave_config(struct dma_chan *chan,
return 0;
}
static int moxart_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg)
{
int ret = 0;
switch (cmd) {
case DMA_PAUSE:
case DMA_RESUME:
return -EINVAL;
case DMA_TERMINATE_ALL:
moxart_terminate_all(chan);
break;
case DMA_SLAVE_CONFIG:
ret = moxart_slave_config(chan, (struct dma_slave_config *)arg);
break;
default:
ret = -ENOSYS;
}
return ret;
}
static struct dma_async_tx_descriptor *moxart_prep_slave_sg(
struct dma_chan *chan, struct scatterlist *sgl,
unsigned int sg_len, enum dma_transfer_direction dir,
......@@ -531,7 +509,8 @@ static void moxart_dma_init(struct dma_device *dma, struct device *dev)
dma->device_free_chan_resources = moxart_free_chan_resources;
dma->device_issue_pending = moxart_issue_pending;
dma->device_tx_status = moxart_tx_status;
	dma->device_config = moxart_slave_config;
	dma->device_terminate_all = moxart_terminate_all;
dma->dev = dev;
INIT_LIST_HEAD(&dma->channels);
......
......@@ -800,79 +800,69 @@ mpc_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
return NULL;
}
static int mpc_dma_device_config(struct dma_chan *chan,
				 struct dma_slave_config *cfg)
{
	struct mpc_dma_chan *mchan = dma_chan_to_mpc_dma_chan(chan);
	unsigned long flags;

	/*
	 * Software constraints:
	 *  - only transfers between a peripheral device and
	 *    memory are supported;
	 *  - only peripheral devices with 4-byte FIFO access register
	 *    are supported;
	 *  - minimal transfer chunk is 4 bytes and consequently
	 *    source and destination addresses must be 4-byte aligned
	 *    and transfer size must be aligned on (4 * maxburst)
	 *    boundary;
	 *  - during the transfer RAM address is being incremented by
	 *    the size of minimal transfer chunk;
	 *  - peripheral port's address is constant during the transfer.
	 */

	if (cfg->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||
	    cfg->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||
	    !IS_ALIGNED(cfg->src_addr, 4) ||
	    !IS_ALIGNED(cfg->dst_addr, 4)) {
		return -EINVAL;
	}

	spin_lock_irqsave(&mchan->lock, flags);

	mchan->src_per_paddr = cfg->src_addr;
	mchan->src_tcd_nunits = cfg->src_maxburst;
	mchan->dst_per_paddr = cfg->dst_addr;
	mchan->dst_tcd_nunits = cfg->dst_maxburst;

	/* Apply defaults */
	if (mchan->src_tcd_nunits == 0)
		mchan->src_tcd_nunits = 1;
	if (mchan->dst_tcd_nunits == 0)
		mchan->dst_tcd_nunits = 1;

	spin_unlock_irqrestore(&mchan->lock, flags);

	return 0;
}

static int mpc_dma_device_terminate_all(struct dma_chan *chan)
{
	struct mpc_dma_chan *mchan = dma_chan_to_mpc_dma_chan(chan);
	struct mpc_dma *mdma = dma_chan_to_mpc_dma(chan);
	unsigned long flags;

	/* Disable channel requests */
	spin_lock_irqsave(&mchan->lock, flags);

	out_8(&mdma->regs->dmacerq, chan->chan_id);
	list_splice_tail_init(&mchan->prepared, &mchan->free);
	list_splice_tail_init(&mchan->queued, &mchan->free);
	list_splice_tail_init(&mchan->active, &mchan->free);

	spin_unlock_irqrestore(&mchan->lock, flags);

	return 0;
}
static int mpc_dma_probe(struct platform_device *op)
......@@ -963,7 +953,8 @@ static int mpc_dma_probe(struct platform_device *op)
dma->device_tx_status = mpc_dma_tx_status;
dma->device_prep_dma_memcpy = mpc_dma_prep_memcpy;
dma->device_prep_slave_sg = mpc_dma_prep_slave_sg;
dma->device_control = mpc_dma_device_control;
dma->device_config = mpc_dma_device_config;
dma->device_terminate_all = mpc_dma_device_terminate_all;
INIT_LIST_HEAD(&dma->channels);
dma_cap_set(DMA_MEMCPY, dma->cap_mask);
......
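Given the software constraints spelled out in mpc_dma_device_config(), a slave consumer of this controller has to submit a configuration with 4-byte bus widths and 4-byte-aligned addresses on both sides, since the callback rejects anything else with -EINVAL. A minimal sketch (fifo_phys and buf_phys are hypothetical 4-byte-aligned physical addresses, chan an already requested channel):

    struct dma_slave_config cfg = {
        .direction      = DMA_DEV_TO_MEM,
        .src_addr       = fifo_phys,                   /* peripheral FIFO register */
        .src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,  /* only width accepted */
        .src_maxburst   = 4,   /* transfer size must align to 4 * maxburst */
        .dst_addr       = buf_phys,                    /* also checked for alignment */
        .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
        .dst_maxburst   = 4,
    };
    int ret = dmaengine_slave_config(chan, &cfg);  /* reaches mpc_dma_device_config() */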
......@@ -928,14 +928,6 @@ mv_xor_xor_self_test(struct mv_xor_chan *mv_chan)
return err;
}
/* This driver does not implement any of the optional DMA operations. */
static int
mv_xor_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg)
{
return -ENOSYS;
}
static int mv_xor_channel_remove(struct mv_xor_chan *mv_chan)
{
struct dma_chan *chan, *_chan;
......@@ -1008,7 +1000,6 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
dma_dev->device_free_chan_resources = mv_xor_free_chan_resources;
dma_dev->device_tx_status = mv_xor_status;
dma_dev->device_issue_pending = mv_xor_issue_pending;
dma_dev->device_control = mv_xor_control;
dma_dev->dev = &pdev->dev;
/* set prep routines based on capability */
......
......@@ -202,8 +202,9 @@ static struct mxs_dma_chan *to_mxs_dma_chan(struct dma_chan *chan)
return container_of(chan, struct mxs_dma_chan, chan);
}
static void mxs_dma_reset_chan(struct mxs_dma_chan *mxs_chan)
static void mxs_dma_reset_chan(struct dma_chan *chan)
{
struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
int chan_id = mxs_chan->chan.chan_id;
......@@ -250,8 +251,9 @@ static void mxs_dma_reset_chan(struct mxs_dma_chan *mxs_chan)
mxs_chan->status = DMA_COMPLETE;
}
static void mxs_dma_enable_chan(struct mxs_dma_chan *mxs_chan)
static void mxs_dma_enable_chan(struct dma_chan *chan)
{
struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
int chan_id = mxs_chan->chan.chan_id;
......@@ -272,13 +274,16 @@ static void mxs_dma_enable_chan(struct mxs_dma_chan *mxs_chan)
mxs_chan->reset = false;
}
static void mxs_dma_disable_chan(struct mxs_dma_chan *mxs_chan)
static void mxs_dma_disable_chan(struct dma_chan *chan)
{
struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
mxs_chan->status = DMA_COMPLETE;
}
static void mxs_dma_pause_chan(struct mxs_dma_chan *mxs_chan)
static int mxs_dma_pause_chan(struct dma_chan *chan)
{
struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
int chan_id = mxs_chan->chan.chan_id;
......@@ -291,10 +296,12 @@ static void mxs_dma_pause_chan(struct mxs_dma_chan *mxs_chan)
mxs_dma->base + HW_APBHX_CHANNEL_CTRL + STMP_OFFSET_REG_SET);
mxs_chan->status = DMA_PAUSED;
return 0;
}
static void mxs_dma_resume_chan(struct mxs_dma_chan *mxs_chan)
static int mxs_dma_resume_chan(struct dma_chan *chan)
{
struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
int chan_id = mxs_chan->chan.chan_id;
......@@ -307,6 +314,7 @@ static void mxs_dma_resume_chan(struct mxs_dma_chan *mxs_chan)
mxs_dma->base + HW_APBHX_CHANNEL_CTRL + STMP_OFFSET_REG_CLR);
mxs_chan->status = DMA_IN_PROGRESS;
return 0;
}
static dma_cookie_t mxs_dma_tx_submit(struct dma_async_tx_descriptor *tx)
......@@ -383,7 +391,7 @@ static irqreturn_t mxs_dma_int_handler(int irq, void *dev_id)
"%s: error in channel %d\n", __func__,
chan);
mxs_chan->status = DMA_ERROR;
mxs_dma_reset_chan(mxs_chan);
mxs_dma_reset_chan(&mxs_chan->chan);
} else if (mxs_chan->status != DMA_COMPLETE) {
if (mxs_chan->flags & MXS_DMA_SG_LOOP) {
mxs_chan->status = DMA_IN_PROGRESS;
......@@ -432,7 +440,7 @@ static int mxs_dma_alloc_chan_resources(struct dma_chan *chan)
if (ret)
goto err_clk;
mxs_dma_reset_chan(mxs_chan);
mxs_dma_reset_chan(chan);
dma_async_tx_descriptor_init(&mxs_chan->desc, chan);
mxs_chan->desc.tx_submit = mxs_dma_tx_submit;
......@@ -456,7 +464,7 @@ static void mxs_dma_free_chan_resources(struct dma_chan *chan)
struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
mxs_dma_disable_chan(mxs_chan);
mxs_dma_disable_chan(chan);
free_irq(mxs_chan->chan_irq, mxs_dma);
......@@ -651,28 +659,12 @@ static struct dma_async_tx_descriptor *mxs_dma_prep_dma_cyclic(
return NULL;
}
static int mxs_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
                           unsigned long arg)
{
    struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
    int ret = 0;

    switch (cmd) {
    case DMA_TERMINATE_ALL:
        mxs_dma_reset_chan(mxs_chan);
        mxs_dma_disable_chan(mxs_chan);
        break;
    case DMA_PAUSE:
        mxs_dma_pause_chan(mxs_chan);
        break;
    case DMA_RESUME:
        mxs_dma_resume_chan(mxs_chan);
        break;
    default:
        ret = -ENOSYS;
    }

    return ret;
}

static int mxs_dma_terminate_all(struct dma_chan *chan)
{
    mxs_dma_reset_chan(chan);
    mxs_dma_disable_chan(chan);

    return 0;
}
static enum dma_status mxs_dma_tx_status(struct dma_chan *chan,
......@@ -701,13 +693,6 @@ static enum dma_status mxs_dma_tx_status(struct dma_chan *chan,
return mxs_chan->status;
}
static void mxs_dma_issue_pending(struct dma_chan *chan)
{
struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
mxs_dma_enable_chan(mxs_chan);
}
static int __init mxs_dma_init(struct mxs_dma_engine *mxs_dma)
{
int ret;
......@@ -860,8 +845,14 @@ static int __init mxs_dma_probe(struct platform_device *pdev)
mxs_dma->dma_device.device_tx_status = mxs_dma_tx_status;
mxs_dma->dma_device.device_prep_slave_sg = mxs_dma_prep_slave_sg;
mxs_dma->dma_device.device_prep_dma_cyclic = mxs_dma_prep_dma_cyclic;
mxs_dma->dma_device.device_control = mxs_dma_control;
mxs_dma->dma_device.device_issue_pending = mxs_dma_issue_pending;
mxs_dma->dma_device.device_pause = mxs_dma_pause_chan;
mxs_dma->dma_device.device_resume = mxs_dma_resume_chan;
mxs_dma->dma_device.device_terminate_all = mxs_dma_terminate_all;
mxs_dma->dma_device.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
mxs_dma->dma_device.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
mxs_dma->dma_device.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
mxs_dma->dma_device.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
mxs_dma->dma_device.device_issue_pending = mxs_dma_enable_chan;
ret = dma_async_device_register(&mxs_dma->dma_device);
if (ret) {
......
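With device_pause and device_resume wired up and the capability fields filled in, a client no longer needs driver-specific knowledge to find out whether pausing works; the core reports it. A minimal consumer sketch (chan is an already requested channel; error handling elided):

    struct dma_slave_caps caps;

    if (!dma_get_slave_caps(chan, &caps) && caps.cmd_pause) {
        dmaengine_pause(chan);    /* ends up in mxs_dma_pause_chan() */
        /* ... inspect the buffer while the channel is frozen ... */
        dmaengine_resume(chan);   /* mxs_dma_resume_chan() */
    }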
......@@ -504,7 +504,7 @@ static int nbpf_prep_one(struct nbpf_link_desc *ldesc,
* pauses DMA and reads out data received via DMA as well as those left
* in the Rx FIFO. For this to work with the RAM side using burst
* transfers we enable the SBE bit and terminate the transfer in our
* DMA_PAUSE handler.
* .device_pause handler.
*/
mem_xfer = nbpf_xfer_ds(chan->nbpf, size);
......@@ -565,13 +565,6 @@ static void nbpf_configure(struct nbpf_device *nbpf)
nbpf_write(nbpf, NBPF_CTRL, NBPF_CTRL_LVINT);
}
static void nbpf_pause(struct nbpf_channel *chan)
{
nbpf_chan_write(chan, NBPF_CHAN_CTRL, NBPF_CHAN_CTRL_SETSUS);
/* See comment in nbpf_prep_one() */
nbpf_chan_write(chan, NBPF_CHAN_CTRL, NBPF_CHAN_CTRL_CLREN);
}
/* Generic part */
/* DMA ENGINE functions */
......@@ -837,54 +830,58 @@ static void nbpf_chan_idle(struct nbpf_channel *chan)
}
}
static int nbpf_control(struct dma_chan *dchan, enum dma_ctrl_cmd cmd,
                        unsigned long arg)
{
    struct nbpf_channel *chan = nbpf_to_chan(dchan);
    struct dma_slave_config *config;

    dev_dbg(dchan->device->dev, "Entry %s(%d)\n", __func__, cmd);

    switch (cmd) {
    case DMA_TERMINATE_ALL:
        dev_dbg(dchan->device->dev, "Terminating\n");
        nbpf_chan_halt(chan);
        nbpf_chan_idle(chan);
        break;

    case DMA_SLAVE_CONFIG:
        if (!arg)
            return -EINVAL;
        config = (struct dma_slave_config *)arg;

        /*
         * We could check config->slave_id to match chan->terminal here,
         * but with DT they would be coming from the same source, so
         * such a check would be superfluous
         */

        chan->slave_dst_addr = config->dst_addr;
        chan->slave_dst_width = nbpf_xfer_size(chan->nbpf,
                                               config->dst_addr_width, 1);
        chan->slave_dst_burst = nbpf_xfer_size(chan->nbpf,
                                               config->dst_addr_width,
                                               config->dst_maxburst);
        chan->slave_src_addr = config->src_addr;
        chan->slave_src_width = nbpf_xfer_size(chan->nbpf,
                                               config->src_addr_width, 1);
        chan->slave_src_burst = nbpf_xfer_size(chan->nbpf,
                                               config->src_addr_width,
                                               config->src_maxburst);
        break;

    case DMA_PAUSE:
        chan->paused = true;
        nbpf_pause(chan);
        break;

    default:
        return -ENXIO;
    }

    return 0;
}

static int nbpf_pause(struct dma_chan *dchan)
{
    struct nbpf_channel *chan = nbpf_to_chan(dchan);

    dev_dbg(dchan->device->dev, "Entry %s\n", __func__);

    chan->paused = true;
    nbpf_chan_write(chan, NBPF_CHAN_CTRL, NBPF_CHAN_CTRL_SETSUS);
    /* See comment in nbpf_prep_one() */
    nbpf_chan_write(chan, NBPF_CHAN_CTRL, NBPF_CHAN_CTRL_CLREN);

    return 0;
}

static int nbpf_terminate_all(struct dma_chan *dchan)
{
    struct nbpf_channel *chan = nbpf_to_chan(dchan);

    dev_dbg(dchan->device->dev, "Entry %s\n", __func__);
    dev_dbg(dchan->device->dev, "Terminating\n");

    nbpf_chan_halt(chan);
    nbpf_chan_idle(chan);

    return 0;
}

static int nbpf_config(struct dma_chan *dchan,
                       struct dma_slave_config *config)
{
    struct nbpf_channel *chan = nbpf_to_chan(dchan);

    dev_dbg(dchan->device->dev, "Entry %s\n", __func__);

    /*
     * We could check config->slave_id to match chan->terminal here,
     * but with DT they would be coming from the same source, so
     * such a check would be superfluous
     */

    chan->slave_dst_addr = config->dst_addr;
    chan->slave_dst_width = nbpf_xfer_size(chan->nbpf,
                                           config->dst_addr_width, 1);
    chan->slave_dst_burst = nbpf_xfer_size(chan->nbpf,
                                           config->dst_addr_width,
                                           config->dst_maxburst);
    chan->slave_src_addr = config->src_addr;
    chan->slave_src_width = nbpf_xfer_size(chan->nbpf,
                                           config->src_addr_width, 1);
    chan->slave_src_burst = nbpf_xfer_size(chan->nbpf,
                                           config->src_addr_width,
                                           config->src_maxburst);

    return 0;
}
......@@ -1072,18 +1069,6 @@ static void nbpf_free_chan_resources(struct dma_chan *dchan)
}
}
static int nbpf_slave_caps(struct dma_chan *dchan,
struct dma_slave_caps *caps)
{
caps->src_addr_widths = NBPF_DMA_BUSWIDTHS;
caps->dstn_addr_widths = NBPF_DMA_BUSWIDTHS;
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = false;
caps->cmd_terminate = true;
return 0;
}
static struct dma_chan *nbpf_of_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
......@@ -1414,7 +1399,6 @@ static int nbpf_probe(struct platform_device *pdev)
dma_dev->device_prep_dma_memcpy = nbpf_prep_memcpy;
dma_dev->device_tx_status = nbpf_tx_status;
dma_dev->device_issue_pending = nbpf_issue_pending;
dma_dev->device_slave_caps = nbpf_slave_caps;
/*
* If we drop support for unaligned MEMCPY buffer addresses and / or
......@@ -1426,7 +1410,13 @@ static int nbpf_probe(struct platform_device *pdev)
/* Compulsory for DMA_SLAVE fields */
dma_dev->device_prep_slave_sg = nbpf_prep_slave_sg;
dma_dev->device_control = nbpf_control;
dma_dev->device_config = nbpf_config;
dma_dev->device_pause = nbpf_pause;
dma_dev->device_terminate_all = nbpf_terminate_all;
dma_dev->src_addr_widths = NBPF_DMA_BUSWIDTHS;
dma_dev->dst_addr_widths = NBPF_DMA_BUSWIDTHS;
dma_dev->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
platform_set_drvdata(pdev, nbpf);
......
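The dropped nbpf_slave_caps() callback shows the other half of the cleanup: drivers no longer answer capability queries themselves. They publish src_addr_widths, dst_addr_widths, directions and residue_granularity on struct dma_device, and the core assembles the reply, inferring pause and terminate support from the presence of the hooks. dma_get_slave_caps() in the core then looks roughly like this (a paraphrase of the post-cleanup core from memory, not a verbatim quote):

int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
{
    struct dma_device *device;

    if (!chan || !caps)
        return -EINVAL;

    device = chan->device;
    if (!test_bit(DMA_SLAVE, device->cap_mask.bits))
        return -ENXIO;

    caps->src_addr_widths = device->src_addr_widths;
    caps->dst_addr_widths = device->dst_addr_widths;
    caps->directions = device->directions;
    caps->residue_granularity = device->residue_granularity;

    /* pause/terminate support is inferred from the hooks themselves */
    caps->cmd_pause = !!device->device_pause;
    caps->cmd_terminate = !!device->device_terminate_all;

    return 0;
}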
......@@ -159,6 +159,10 @@ struct dma_chan *of_dma_request_slave_channel(struct device_node *np,
return ERR_PTR(-ENODEV);
}
/* Silently fail if there is not even the "dmas" property */
if (!of_find_property(np, "dmas", NULL))
return ERR_PTR(-ENODEV);
count = of_property_count_strings(np, "dma-names");
if (count < 0) {
pr_err("%s: dma-names property of node '%s' missing or empty\n",
......
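The practical effect of that early return: a device node without any "dmas" property now fails with a quiet -ENODEV instead of an error message, which suits drivers that treat DMA as optional. A hypothetical consumer probe might handle it like this (sketch; the "rx" channel name and fallback policy are illustrative):

    chan = dma_request_slave_channel_reason(&pdev->dev, "rx");
    if (IS_ERR(chan)) {
        if (PTR_ERR(chan) == -EPROBE_DEFER)
            return -EPROBE_DEFER;   /* provider exists but is not ready yet */
        chan = NULL;                /* no "dmas" property: fall back to PIO */
    }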
...... (diffs for 7 more changed files are collapsed in this view)
......@@ -16,3 +16,4 @@ obj-$(CONFIG_SH_DMAE) += shdma.o
obj-$(CONFIG_SUDMAC) += sudmac.o
obj-$(CONFIG_RCAR_HPB_DMAE) += rcar-hpbdma.o
obj-$(CONFIG_RCAR_AUDMAC_PP) += rcar-audmapp.o
obj-$(CONFIG_RCAR_DMAC) += rcar-dmac.o
...... (diff for 1 more changed file is collapsed in this view)
......@@ -534,6 +534,8 @@ static int hpb_dmae_chan_probe(struct hpb_dmae_device *hpbdev, int id)
static int hpb_dmae_probe(struct platform_device *pdev)
{
const enum dma_slave_buswidth widths = DMA_SLAVE_BUSWIDTH_1_BYTE |
DMA_SLAVE_BUSWIDTH_2_BYTES | DMA_SLAVE_BUSWIDTH_4_BYTES;
struct hpb_dmae_pdata *pdata = pdev->dev.platform_data;
struct hpb_dmae_device *hpbdev;
struct dma_device *dma_dev;
......@@ -595,6 +597,10 @@ static int hpb_dmae_probe(struct platform_device *pdev)
dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask);
dma_cap_set(DMA_SLAVE, dma_dev->cap_mask);
dma_dev->src_addr_widths = widths;
dma_dev->dst_addr_widths = widths;
dma_dev->directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);
dma_dev->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
hpbdev->shdma_dev.ops = &hpb_dmae_ops;
hpbdev->shdma_dev.desc_size = sizeof(struct hpb_desc);
......
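The residue_granularity field added here tells clients how precise a residue readout from this controller can be. With DMA_RESIDUE_GRANULARITY_DESCRIPTOR, a status query is only descriptor-accurate, e.g. (sketch; chan and cookie come from an earlier submitted transfer):

    struct dma_tx_state state;
    enum dma_status status;

    status = dmaengine_tx_status(chan, cookie, &state);
    /*
     * DESCRIPTOR granularity: state.residue only changes when a whole
     * descriptor completes, so it cannot be used for byte-accurate
     * progress tracking (as cyclic audio pointer queries would want).
     */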
...... (diffs for 10 more changed files are collapsed in this view)