openeuler / raspberrypi-kernel

commit 2cd6f792
Author: Vinod Koul
Date:   Feb 02, 2015

    Merge branch 'topic/slave_caps_device_control_fix_rebased' into for-linus

Parents: c914570f, 5cf5aec5

Showing 50 changed files with 1647 additions and 1829 deletions (+1647, -1829)
 Documentation/dmaengine/provider.txt    +55   -42
 drivers/crypto/ux500/cryp/cryp_core.c   +2    -2
 drivers/crypto/ux500/hash/hash_core.c   +1    -1
 drivers/dma/amba-pl08x.c                +90   -66
 drivers/dma/at_hdmac.c                  +82   -48
 drivers/dma/at_hdmac_regs.h             +2    -1
 drivers/dma/at_xdmac.c                  +68   -59
 drivers/dma/bcm2835-dma.c               +11   -35
 drivers/dma/coh901318.c                 +70   -83
 drivers/dma/cppi41.c                    +1    -29
 drivers/dma/dma-jz4740.c                +3    -17
 drivers/dma/dmaengine.c                 +62   -22
 drivers/dma/dw/core.c                   +59   -39
 drivers/dma/dw/regs.h                   +1    -1
 drivers/dma/edma.c                      +21   -49
 drivers/dma/ep93xx_dma.c                +8    -35
 drivers/dma/fsl-edma.c                  +62   -61
 drivers/dma/fsldma.c                    +37   -60
 drivers/dma/fsldma.h                    +4    -0
 drivers/dma/imx-dma.c                   +51   -52
 drivers/dma/imx-sdma.c                  +50   -82
 drivers/dma/intel_mid_dma.c             +6    -19
 drivers/dma/ipu/ipu_idmac.c             +51   -45
 drivers/dma/k3dma.c                     +110  -93
 drivers/dma/mmp_pdma.c                  +56   -53
 drivers/dma/mmp_tdma.c                  +46   -39
 drivers/dma/moxart-dma.c                +2    -23
 drivers/dma/mpc512x_dma.c               +51   -60
 drivers/dma/mv_xor.c                    +0    -9
 drivers/dma/mxs-dma.c                   +28   -37
 drivers/dma/nbpfaxi.c                   +51   -61
 drivers/dma/omap-dma.c                  +19   -50
 drivers/dma/pch_dma.c                   +2    -6
 drivers/dma/pl330.c                     +55   -79
 drivers/dma/qcom_bam_dma.c              +43   -42
 drivers/dma/s3c24xx-dma.c               +36   -39
 drivers/dma/sa11x0-dma.c                +82   -75
 drivers/dma/sh/rcar-hpbdma.c            +6    -0
 drivers/dma/sh/shdma-base.c             +33   -39
 drivers/dma/sh/shdmac.c                 +9    -0
 drivers/dma/sirf-dma.c                  +16   -43
 drivers/dma/ste_dma40.c                 +30   -33
 drivers/dma/sun6i-dma.c                 +87   -73
 drivers/dma/tegra20-apb-dma.c           +20   -22
 drivers/dma/timb_dma.c                  +2    -6
 drivers/dma/txx9dmac.c                  +2    -7
 drivers/dma/xilinx/xilinx_vdma.c        +6    -23
 drivers/rapidio/devices/tsi721_dma.c    +2    -6
 include/linux/dmaengine.h               +55   -62
 sound/soc/soc-generic-dmaengine-pcm.c   +1    -1
Documentation/dmaengine/provider.txt

@@ -113,6 +113,31 @@ need to initialize a few fields in there:
   * channels:	should be initialized as a list using the
 		INIT_LIST_HEAD macro for example
 
+  * src_addr_widths:
+    - should contain a bitmask of the supported source transfer width
+
+  * dst_addr_widths:
+    - should contain a bitmask of the supported destination transfer
+      width
+
+  * directions:
+    - should contain a bitmask of the supported slave directions
+      (i.e. excluding mem2mem transfers)
+
+  * residue_granularity:
+    - Granularity of the transfer residue reported to dma_set_residue.
+    - This can be either:
+      + Descriptor
+        -> Your device doesn't support any kind of residue
+           reporting. The framework will only know that a particular
+           transaction descriptor is done.
+      + Segment
+        -> Your device is able to report which chunks have been
+           transferred
+      + Burst
+        -> Your device is able to report which burst have been
+           transferred
+
   * dev:	should hold the pointer to the struct device associated
 		to your current driver instance.
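The hunk above adds struct dma_device fields that a provider is now expected to fill in at probe time instead of answering a device_slave_caps() callback. A minimal sketch of what that looks like in practice (the dma_device field names are the real ones; the foo_ prefix marks a hypothetical driver, and the chosen widths are illustrative):

	#include <linux/dmaengine.h>

	/* Hypothetical probe-time setup; mirrors what at_hdmac.c and
	 * dw/core.c do further down in this same diff. */
	static void foo_dma_init_caps(struct dma_device *dma)
	{
		/* Bitmasks of supported transfer widths, per direction */
		dma->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
				       BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
				       BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
		dma->dst_addr_widths = dma->src_addr_widths;

		/* Slave directions only, i.e. no DMA_MEM_TO_MEM here */
		dma->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);

		/* How precise the reported transfer residue is */
		dma->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
	}

Because these are plain data, the framework can report them generically (see dma_get_slave_caps() in the dmaengine.c hunks below) without calling back into the driver.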
@@ -274,48 +299,36 @@ supported.
     account the current period.
   - This function can be called in an interrupt context.
 
-  * device_control
-    - Used by client drivers to control and configure the channel it
-      has a handle on.
-    - Called with a command and an argument
-      + The command is one of the values listed by the enum
-        dma_ctrl_cmd. The valid commands are:
-        + DMA_PAUSE
-          + Pauses a transfer on the channel
-          + This command should operate synchronously on the channel,
-            pausing right away the work of the given channel
-        + DMA_RESUME
-          + Restarts a transfer on the channel
-          + This command should operate synchronously on the channel,
-            resuming right away the work of the given channel
-        + DMA_TERMINATE_ALL
-          + Aborts all the pending and ongoing transfers on the
-            channel
-          + This command should operate synchronously on the channel,
-            terminating right away all the channels
-        + DMA_SLAVE_CONFIG
-          + Reconfigures the channel with passed configuration
-          + This command should NOT perform synchronously, or on any
-            currently queued transfers, but only on subsequent ones
-          + In this case, the function will receive a
-            dma_slave_config structure pointer as an argument, that
-            will detail which configuration to use.
-          + Even though that structure contains a direction field,
-            this field is deprecated in favor of the direction
-            argument given to the prep_* functions
-        + FSLDMA_EXTERNAL_START
-          + TODO: Why does that even exist?
-      + The argument is an opaque unsigned long. This actually is a
-        pointer to a struct dma_slave_config that should be used only
-        in the DMA_SLAVE_CONFIG.
-
-  * device_slave_caps
-    - Called through the framework by client drivers in order to have
-      an idea of what are the properties of the channel allocated to
-      them.
-    - Such properties are the buswidth, available directions, etc.
-    - Required for every generic layer doing DMA transfers, such as
-      ASoC.
+  * device_config
+    - Reconfigures the channel with the configuration given as
+      argument
+    - This command should NOT perform synchronously, or on any
+      currently queued transfers, but only on subsequent ones
+    - In this case, the function will receive a dma_slave_config
+      structure pointer as an argument, that will detail which
+      configuration to use.
+    - Even though that structure contains a direction field, this
+      field is deprecated in favor of the direction argument given to
+      the prep_* functions
+    - This call is mandatory for slave operations only. This should NOT be
+      set or expected to be set for memcpy operations.
+      If a driver support both, it should use this call for slave
+      operations only and not for memcpy ones.
+
+  * device_pause
+    - Pauses a transfer on the channel
+    - This command should operate synchronously on the channel,
+      pausing right away the work of the given channel
+
+  * device_resume
+    - Resumes a transfer on the channel
+    - This command should operate synchronously on the channel,
+      pausing right away the work of the given channel
+
+  * device_terminate_all
+    - Aborts all the pending and ongoing transfers on the channel
+    - This command should operate synchronously on the channel,
+      terminating right away all the channels
 
 Misc notes (stuff that should be documented, but don't really know
 where to put them)
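This second hunk documents the core of the series: one multiplexed .device_control callback, driven by an enum dma_ctrl_cmd plus an opaque unsigned long, becomes four dedicated, type-checked callbacks. A sketch of the resulting provider shape under a hypothetical foo_ driver (the dma_device member names are the real ones; all foo_* bodies are placeholders):

	#include <linux/dmaengine.h>

	/* Placeholder implementations for a hypothetical foo_dma provider */
	static int foo_config(struct dma_chan *chan, struct dma_slave_config *cfg)
	{
		/* must only affect transfers queued after this call */
		return 0;
	}

	static int foo_pause(struct dma_chan *chan)
	{
		/* must take effect synchronously */
		return 0;
	}

	static int foo_resume(struct dma_chan *chan)
	{
		/* must take effect synchronously */
		return 0;
	}

	static int foo_terminate_all(struct dma_chan *chan)
	{
		/* aborts pending and ongoing transfers, synchronously */
		return 0;
	}

	static void foo_dma_setup(struct dma_device *dma)
	{
		/* replaces the old single assignment: dma->device_control = ... */
		dma->device_config = foo_config;
		dma->device_pause = foo_pause;
		dma->device_resume = foo_resume;
		dma->device_terminate_all = foo_terminate_all;
	}

The per-driver hunks below are essentially this transformation applied mechanically to each controller.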
drivers/crypto/ux500/cryp/cryp_core.c

@@ -606,12 +606,12 @@ static void cryp_dma_done(struct cryp_ctx *ctx)
 	dev_dbg(ctx->device->dev, "[%s]: ", __func__);
 
 	chan = ctx->device->dma.chan_mem2cryp;
-	dmaengine_device_control(chan, DMA_TERMINATE_ALL, 0);
+	dmaengine_terminate_all(chan);
 	dma_unmap_sg(chan->device->dev, ctx->device->dma.sg_src,
 		     ctx->device->dma.sg_src_len, DMA_TO_DEVICE);
 
 	chan = ctx->device->dma.chan_cryp2mem;
-	dmaengine_device_control(chan, DMA_TERMINATE_ALL, 0);
+	dmaengine_terminate_all(chan);
 	dma_unmap_sg(chan->device->dev, ctx->device->dma.sg_dst,
 		     ctx->device->dma.sg_dst_len, DMA_FROM_DEVICE);
 }
drivers/crypto/ux500/hash/hash_core.c

@@ -202,7 +202,7 @@ static void hash_dma_done(struct hash_ctx *ctx)
 	struct dma_chan *chan;
 
 	chan = ctx->device->dma.chan_mem2hash;
-	dmaengine_device_control(chan, DMA_TERMINATE_ALL, 0);
+	dmaengine_terminate_all(chan);
 	dma_unmap_sg(chan->device->dev, ctx->device->dma.sg,
 		     ctx->device->dma.sg_len, DMA_TO_DEVICE);
 }
drivers/dma/amba-pl08x.c

@@ -1386,32 +1386,6 @@ static u32 pl08x_get_cctl(struct pl08x_dma_chan *plchan,
 	return pl08x_cctl(cctl);
 }
 
-static int dma_set_runtime_config(struct dma_chan *chan,
-				  struct dma_slave_config *config)
-{
-	struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
-	struct pl08x_driver_data *pl08x = plchan->host;
-
-	if (!plchan->slave)
-		return -EINVAL;
-
-	/* Reject definitely invalid configurations */
-	if (config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
-	    config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
-		return -EINVAL;
-
-	if (config->device_fc && pl08x->vd->pl080s) {
-		dev_err(&pl08x->adev->dev,
-			"%s: PL080S does not support peripheral flow control\n",
-			__func__);
-		return -EINVAL;
-	}
-
-	plchan->cfg = *config;
-
-	return 0;
-}
-
 /*
  * Slave transactions callback to the slave device to allow
  * synchronization of slave DMA signals with the DMAC enable
 
@@ -1693,20 +1667,71 @@ static struct dma_async_tx_descriptor *pl08x_prep_dma_cyclic(
 	return vchan_tx_prep(&plchan->vc, &txd->vd, flags);
 }
 
-static int pl08x_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-			 unsigned long arg)
+static int pl08x_config(struct dma_chan *chan,
+			struct dma_slave_config *config)
+{
+	struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
+	struct pl08x_driver_data *pl08x = plchan->host;
+
+	if (!plchan->slave)
+		return -EINVAL;
+
+	/* Reject definitely invalid configurations */
+	if (config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
+	    config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
+		return -EINVAL;
+
+	if (config->device_fc && pl08x->vd->pl080s) {
+		dev_err(&pl08x->adev->dev,
+			"%s: PL080S does not support peripheral flow control\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	plchan->cfg = *config;
+
+	return 0;
+}
+
+static int pl08x_terminate_all(struct dma_chan *chan)
 {
 	struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
 	struct pl08x_driver_data *pl08x = plchan->host;
 	unsigned long flags;
-	int ret = 0;
 
-	/* Controls applicable to inactive channels */
-	if (cmd == DMA_SLAVE_CONFIG) {
-		return dma_set_runtime_config(chan,
-					      (struct dma_slave_config *)arg);
+	spin_lock_irqsave(&plchan->vc.lock, flags);
+	if (!plchan->phychan && !plchan->at) {
+		spin_unlock_irqrestore(&plchan->vc.lock, flags);
+		return 0;
 	}
+
+	plchan->state = PL08X_CHAN_IDLE;
+
+	if (plchan->phychan) {
+		/*
+		 * Mark physical channel as free and free any slave
+		 * signal
+		 */
+		pl08x_phy_free(plchan);
+	}
+	/* Dequeue jobs and free LLIs */
+	if (plchan->at) {
+		pl08x_desc_free(&plchan->at->vd);
+		plchan->at = NULL;
+	}
+	/* Dequeue jobs not yet fired as well */
+	pl08x_free_txd_list(pl08x, plchan);
+
+	spin_unlock_irqrestore(&plchan->vc.lock, flags);
+
+	return 0;
+}
+
+static int pl08x_pause(struct dma_chan *chan)
+{
+	struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
+	unsigned long flags;
 
 	/*
 	 * Anything succeeds on channels with no physical allocation and
 	 * no queued transfers.
 
@@ -1717,42 +1742,35 @@ static int pl08x_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
 		return 0;
 	}
 
-	switch (cmd) {
-	case DMA_TERMINATE_ALL:
-		plchan->state = PL08X_CHAN_IDLE;
-
-		if (plchan->phychan) {
-			/*
-			 * Mark physical channel as free and free any slave
-			 * signal
-			 */
-			pl08x_phy_free(plchan);
-		}
-		/* Dequeue jobs and free LLIs */
-		if (plchan->at) {
-			pl08x_desc_free(&plchan->at->vd);
-			plchan->at = NULL;
-		}
-		/* Dequeue jobs not yet fired as well */
-		pl08x_free_txd_list(pl08x, plchan);
-		break;
-	case DMA_PAUSE:
-		pl08x_pause_phy_chan(plchan->phychan);
-		plchan->state = PL08X_CHAN_PAUSED;
-		break;
-	case DMA_RESUME:
-		pl08x_resume_phy_chan(plchan->phychan);
-		plchan->state = PL08X_CHAN_RUNNING;
-		break;
-	default:
-		/* Unknown command */
-		ret = -ENXIO;
-		break;
-	}
+	pl08x_pause_phy_chan(plchan->phychan);
+	plchan->state = PL08X_CHAN_PAUSED;
+
+	spin_unlock_irqrestore(&plchan->vc.lock, flags);
+
+	return 0;
+}
+
+static int pl08x_resume(struct dma_chan *chan)
+{
+	struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
+	unsigned long flags;
+
+	/*
+	 * Anything succeeds on channels with no physical allocation and
+	 * no queued transfers.
+	 */
+	spin_lock_irqsave(&plchan->vc.lock, flags);
+	if (!plchan->phychan && !plchan->at) {
+		spin_unlock_irqrestore(&plchan->vc.lock, flags);
+		return 0;
+	}
+
+	pl08x_resume_phy_chan(plchan->phychan);
+	plchan->state = PL08X_CHAN_RUNNING;
 
 	spin_unlock_irqrestore(&plchan->vc.lock, flags);
 
-	return ret;
+	return 0;
 }
 
 bool pl08x_filter_id(struct dma_chan *chan, void *chan_id)
 
@@ -2048,7 +2066,10 @@ static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
 	pl08x->memcpy.device_prep_dma_interrupt = pl08x_prep_dma_interrupt;
 	pl08x->memcpy.device_tx_status = pl08x_dma_tx_status;
 	pl08x->memcpy.device_issue_pending = pl08x_issue_pending;
-	pl08x->memcpy.device_control = pl08x_control;
+	pl08x->memcpy.device_config = pl08x_config;
+	pl08x->memcpy.device_pause = pl08x_pause;
+	pl08x->memcpy.device_resume = pl08x_resume;
+	pl08x->memcpy.device_terminate_all = pl08x_terminate_all;
 
 	/* Initialize slave engine */
 	dma_cap_set(DMA_SLAVE, pl08x->slave.cap_mask);
 
@@ -2061,7 +2082,10 @@ static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
 	pl08x->slave.device_issue_pending = pl08x_issue_pending;
 	pl08x->slave.device_prep_slave_sg = pl08x_prep_slave_sg;
 	pl08x->slave.device_prep_dma_cyclic = pl08x_prep_dma_cyclic;
-	pl08x->slave.device_control = pl08x_control;
+	pl08x->slave.device_config = pl08x_config;
+	pl08x->slave.device_pause = pl08x_pause;
+	pl08x->slave.device_resume = pl08x_resume;
+	pl08x->slave.device_terminate_all = pl08x_terminate_all;
 
 	/* Get the platform data */
 	pl08x->pd = dev_get_platdata(&adev->dev);
drivers/dma/at_hdmac.c

@@ -42,6 +42,11 @@
 #define	ATC_DEFAULT_CFG		(ATC_FIFOCFG_HALFFIFO)
 #define	ATC_DEFAULT_CTRLB	(ATC_SIF(AT_DMA_MEM_IF) \
 				|ATC_DIF(AT_DMA_MEM_IF))
+#define ATC_DMA_BUSWIDTHS\
+	(BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) |\
+	BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |\
+	BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |\
+	BIT(DMA_SLAVE_BUSWIDTH_4_BYTES))
 
 /*
  * Initial number of descriptors to allocate for each channel. This could
 
@@ -972,11 +977,13 @@ atc_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
 	return NULL;
 }
 
-static int set_runtime_config(struct dma_chan *chan,
-			      struct dma_slave_config *sconfig)
+static int atc_config(struct dma_chan *chan,
+		      struct dma_slave_config *sconfig)
 {
 	struct at_dma_chan	*atchan = to_at_dma_chan(chan);
 
+	dev_vdbg(chan2dev(chan), "%s\n", __func__);
+
 	/* Check if it is chan is configured for slave transfers */
 	if (!chan->private)
 		return -EINVAL;
 
@@ -989,9 +996,28 @@ static int set_runtime_config(struct dma_chan *chan,
 	return 0;
 }
 
-static int atc_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-		       unsigned long arg)
+static int atc_pause(struct dma_chan *chan)
 {
 	struct at_dma_chan	*atchan = to_at_dma_chan(chan);
 	struct at_dma		*atdma = to_at_dma(chan->device);
 	int			chan_id = atchan->chan_common.chan_id;
 	unsigned long		flags;
 
+	LIST_HEAD(list);
+
+	dev_vdbg(chan2dev(chan), "%s\n", __func__);
+
+	spin_lock_irqsave(&atchan->lock, flags);
+
+	dma_writel(atdma, CHER, AT_DMA_SUSP(chan_id));
+	set_bit(ATC_IS_PAUSED, &atchan->status);
+
+	spin_unlock_irqrestore(&atchan->lock, flags);
+
+	return 0;
+}
+
+static int atc_resume(struct dma_chan *chan)
+{
+	struct at_dma_chan	*atchan = to_at_dma_chan(chan);
+	struct at_dma		*atdma = to_at_dma(chan->device);
 
@@ -1000,60 +1026,61 @@ static int atc_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
 	LIST_HEAD(list);
 
-	dev_vdbg(chan2dev(chan), "atc_control (%d)\n", cmd);
+	dev_vdbg(chan2dev(chan), "%s\n", __func__);
 
-	if (cmd == DMA_PAUSE) {
-		spin_lock_irqsave(&atchan->lock, flags);
+	if (!atc_chan_is_paused(atchan))
+		return 0;
 
-		dma_writel(atdma, CHER, AT_DMA_SUSP(chan_id));
-		set_bit(ATC_IS_PAUSED, &atchan->status);
+	spin_lock_irqsave(&atchan->lock, flags);
 
-		spin_unlock_irqrestore(&atchan->lock, flags);
-	} else if (cmd == DMA_RESUME) {
-		if (!atc_chan_is_paused(atchan))
-			return 0;
-
-		spin_lock_irqsave(&atchan->lock, flags);
-
-		dma_writel(atdma, CHDR, AT_DMA_RES(chan_id));
-		clear_bit(ATC_IS_PAUSED, &atchan->status);
-
-		spin_unlock_irqrestore(&atchan->lock, flags);
-	} else if (cmd == DMA_TERMINATE_ALL) {
-		struct at_desc	*desc, *_desc;
-		/*
-		 * This is only called when something went wrong elsewhere, so
-		 * we don't really care about the data. Just disable the
-		 * channel. We still have to poll the channel enable bit due
-		 * to AHB/HSB limitations.
-		 */
-		spin_lock_irqsave(&atchan->lock, flags);
+	dma_writel(atdma, CHDR, AT_DMA_RES(chan_id));
+	clear_bit(ATC_IS_PAUSED, &atchan->status);
 
-		/* disabling channel: must also remove suspend state */
-		dma_writel(atdma, CHDR, AT_DMA_RES(chan_id) | atchan->mask);
+	spin_unlock_irqrestore(&atchan->lock, flags);
 
-		/* confirm that this channel is disabled */
-		while (dma_readl(atdma, CHSR) & atchan->mask)
-			cpu_relax();
+	return 0;
+}
 
-		/* active_list entries will end up before queued entries */
-		list_splice_init(&atchan->queue, &list);
-		list_splice_init(&atchan->active_list, &list);
+static int atc_terminate_all(struct dma_chan *chan)
+{
+	struct at_dma_chan	*atchan = to_at_dma_chan(chan);
+	struct at_dma		*atdma = to_at_dma(chan->device);
+	int			chan_id = atchan->chan_common.chan_id;
+	struct at_desc		*desc, *_desc;
+	unsigned long		flags;
 
-		/* Flush all pending and queued descriptors */
-		list_for_each_entry_safe(desc, _desc, &list, desc_node)
-			atc_chain_complete(atchan, desc);
+	LIST_HEAD(list);
 
-		clear_bit(ATC_IS_PAUSED, &atchan->status);
-		/* if channel dedicated to cyclic operations, free it */
-		clear_bit(ATC_IS_CYCLIC, &atchan->status);
+	dev_vdbg(chan2dev(chan), "%s\n", __func__);
 
-		spin_unlock_irqrestore(&atchan->lock, flags);
-	} else if (cmd == DMA_SLAVE_CONFIG) {
-		return set_runtime_config(chan,
-				(struct dma_slave_config *)arg);
-	} else {
-		return -ENXIO;
-	}
+	/*
+	 * This is only called when something went wrong elsewhere, so
+	 * we don't really care about the data. Just disable the
+	 * channel. We still have to poll the channel enable bit due
+	 * to AHB/HSB limitations.
+	 */
+	spin_lock_irqsave(&atchan->lock, flags);
+
+	/* disabling channel: must also remove suspend state */
+	dma_writel(atdma, CHDR, AT_DMA_RES(chan_id) | atchan->mask);
+
+	/* confirm that this channel is disabled */
+	while (dma_readl(atdma, CHSR) & atchan->mask)
+		cpu_relax();
+
+	/* active_list entries will end up before queued entries */
+	list_splice_init(&atchan->queue, &list);
+	list_splice_init(&atchan->active_list, &list);
+
+	/* Flush all pending and queued descriptors */
+	list_for_each_entry_safe(desc, _desc, &list, desc_node)
+		atc_chain_complete(atchan, desc);
+
+	clear_bit(ATC_IS_PAUSED, &atchan->status);
+	/* if channel dedicated to cyclic operations, free it */
+	clear_bit(ATC_IS_CYCLIC, &atchan->status);
+
+	spin_unlock_irqrestore(&atchan->lock, flags);
 
 	return 0;
 }
 
@@ -1505,7 +1532,14 @@ static int __init at_dma_probe(struct platform_device *pdev)
 		/* controller can do slave DMA: can trigger cyclic transfers */
 		dma_cap_set(DMA_CYCLIC, atdma->dma_common.cap_mask);
 		atdma->dma_common.device_prep_dma_cyclic = atc_prep_dma_cyclic;
-		atdma->dma_common.device_control = atc_control;
+		atdma->dma_common.device_config = atc_config;
+		atdma->dma_common.device_pause = atc_pause;
+		atdma->dma_common.device_resume = atc_resume;
+		atdma->dma_common.device_terminate_all = atc_terminate_all;
+		atdma->dma_common.src_addr_widths = ATC_DMA_BUSWIDTHS;
+		atdma->dma_common.dst_addr_widths = ATC_DMA_BUSWIDTHS;
+		atdma->dma_common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+		atdma->dma_common.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
 	}
 
 	dma_writel(atdma, EN, AT_DMA_ENABLE);
 
@@ -1622,7 +1656,7 @@ static void atc_suspend_cyclic(struct at_dma_chan *atchan)
 	if (!atc_chan_is_paused(atchan)) {
 		dev_warn(chan2dev(chan),
 		"cyclic channel not paused, should be done by channel user\n");
-		atc_control(chan, DMA_PAUSE, 0);
+		atc_pause(chan);
 	}
 
 	/* now preserve additional data for cyclic operations */
drivers/dma/at_hdmac_regs.h

@@ -232,7 +232,8 @@ enum atc_status {
  * @save_dscr: for cyclic operations, preserve next descriptor address in
  *  the cyclic list on suspend/resume cycle
  * @remain_desc: to save remain desc length
- * @dma_sconfig: configuration for slave transfers, passed via DMA_SLAVE_CONFIG
+ * @dma_sconfig: configuration for slave transfers, passed via
+ * .device_config
  * @lock: serializes enqueue/dequeue operations to descriptors lists
  * @active_list: list of descriptors dmaengine is being running on
  * @queue: list of descriptors ready to be submitted to engine
drivers/dma/at_xdmac.c

@@ -174,6 +174,13 @@
 #define AT_XDMAC_MAX_CHAN	0x20
 
+#define AT_XDMAC_DMA_BUSWIDTHS\
+	(BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) |\
+	BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |\
+	BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |\
+	BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) |\
+	BIT(DMA_SLAVE_BUSWIDTH_8_BYTES))
+
 enum atc_status {
 	AT_XDMAC_CHAN_IS_CYCLIC = 0,
 	AT_XDMAC_CHAN_IS_PAUSED,
 
@@ -1107,58 +1114,75 @@ static void at_xdmac_issue_pending(struct dma_chan *chan)
 	return;
 }
 
-static int at_xdmac_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-			    unsigned long arg)
+static int at_xdmac_device_config(struct dma_chan *chan,
+				  struct dma_slave_config *config)
+{
+	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+	int ret;
+
+	dev_dbg(chan2dev(chan), "%s\n", __func__);
+
+	spin_lock_bh(&atchan->lock);
+	ret = at_xdmac_set_slave_config(chan, config);
+	spin_unlock_bh(&atchan->lock);
+
+	return ret;
+}
+
+static int at_xdmac_device_pause(struct dma_chan *chan)
 {
-	struct at_xdmac_desc	*desc, *_desc;
 	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
 	struct at_xdmac		*atxdmac = to_at_xdmac(atchan->chan.device);
-	int			ret = 0;
 
-	dev_dbg(chan2dev(chan), "%s: cmd=%d\n", __func__, cmd);
+	dev_dbg(chan2dev(chan), "%s\n", __func__);
 
 	spin_lock_bh(&atchan->lock);
+	at_xdmac_write(atxdmac, AT_XDMAC_GRWS, atchan->mask);
+	set_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status);
+	spin_unlock_bh(&atchan->lock);
+
+	return 0;
+}
 
-	switch (cmd) {
-	case DMA_PAUSE:
-		at_xdmac_write(atxdmac, AT_XDMAC_GRWS, atchan->mask);
-		set_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status);
-		break;
+static int at_xdmac_device_resume(struct dma_chan *chan)
+{
+	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+	struct at_xdmac		*atxdmac = to_at_xdmac(atchan->chan.device);
 
-	case DMA_RESUME:
-		if (!at_xdmac_chan_is_paused(atchan))
-			break;
+	dev_dbg(chan2dev(chan), "%s\n", __func__);
 
-		at_xdmac_write(atxdmac, AT_XDMAC_GRWR, atchan->mask);
-		clear_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status);
-		break;
+	spin_lock_bh(&atchan->lock);
+	if (!at_xdmac_chan_is_paused(atchan))
+		return 0;
 
-	case DMA_TERMINATE_ALL:
-		at_xdmac_write(atxdmac, AT_XDMAC_GD, atchan->mask);
-		while (at_xdmac_read(atxdmac, AT_XDMAC_GS) & atchan->mask)
-			cpu_relax();
+	at_xdmac_write(atxdmac, AT_XDMAC_GRWR, atchan->mask);
+	clear_bit(AT_XDMAC_CHAN_IS_PAUSED, &atchan->status);
+	spin_unlock_bh(&atchan->lock);
+
+	return 0;
+}
 
-		/* Cancel all pending transfers. */
-		list_for_each_entry_safe(desc, _desc, &atchan->xfers_list, xfer_node)
-			at_xdmac_remove_xfer(atchan, desc);
+static int at_xdmac_device_terminate_all(struct dma_chan *chan)
+{
+	struct at_xdmac_desc	*desc, *_desc;
+	struct at_xdmac_chan	*atchan = to_at_xdmac_chan(chan);
+	struct at_xdmac		*atxdmac = to_at_xdmac(atchan->chan.device);
 
-		clear_bit(AT_XDMAC_CHAN_IS_CYCLIC, &atchan->status);
-		break;
+	dev_dbg(chan2dev(chan), "%s\n", __func__);
 
-	case DMA_SLAVE_CONFIG:
-		ret = at_xdmac_set_slave_config(chan,
-				(struct dma_slave_config *)arg);
-		break;
+	spin_lock_bh(&atchan->lock);
+	at_xdmac_write(atxdmac, AT_XDMAC_GD, atchan->mask);
+	while (at_xdmac_read(atxdmac, AT_XDMAC_GS) & atchan->mask)
+		cpu_relax();
 
-	default:
-		dev_err(chan2dev(chan),
-			"unmanaged or unknown dma control cmd: %d\n", cmd);
-		ret = -ENXIO;
-	}
+	/* Cancel all pending transfers. */
+	list_for_each_entry_safe(desc, _desc, &atchan->xfers_list, xfer_node)
+		at_xdmac_remove_xfer(atchan, desc);
 
+	clear_bit(AT_XDMAC_CHAN_IS_CYCLIC, &atchan->status);
 	spin_unlock_bh(&atchan->lock);
 
-	return ret;
+	return 0;
 }
 
 static int at_xdmac_alloc_chan_resources(struct dma_chan *chan)
 
@@ -1217,27 +1241,6 @@ static void at_xdmac_free_chan_resources(struct dma_chan *chan)
 	return;
 }
 
-#define AT_XDMAC_DMA_BUSWIDTHS\
-	(BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) |\
-	BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |\
-	BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |\
-	BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) |\
-	BIT(DMA_SLAVE_BUSWIDTH_8_BYTES))
-
-static int at_xdmac_device_slave_caps(struct dma_chan *dchan,
-				      struct dma_slave_caps *caps)
-{
-	caps->src_addr_widths = AT_XDMAC_DMA_BUSWIDTHS;
-	caps->dstn_addr_widths = AT_XDMAC_DMA_BUSWIDTHS;
-	caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
-	caps->cmd_pause = true;
-	caps->cmd_terminate = true;
-	caps->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
-
-	return 0;
-}
-
 #ifdef CONFIG_PM
 static int atmel_xdmac_prepare(struct device *dev)
 {
 
@@ -1270,7 +1273,7 @@ static int atmel_xdmac_suspend(struct device *dev)
 		if (at_xdmac_chan_is_cyclic(atchan)) {
 			if (!at_xdmac_chan_is_paused(atchan))
-				at_xdmac_control(chan, DMA_PAUSE, 0);
+				at_xdmac_device_pause(chan);
 			atchan->save_cim = at_xdmac_chan_read(atchan, AT_XDMAC_CIM);
 			atchan->save_cnda = at_xdmac_chan_read(atchan, AT_XDMAC_CNDA);
 			atchan->save_cndc = at_xdmac_chan_read(atchan, AT_XDMAC_CNDC);
 
@@ -1407,8 +1410,14 @@ static int at_xdmac_probe(struct platform_device *pdev)
 	atxdmac->dma.device_prep_dma_cyclic	= at_xdmac_prep_dma_cyclic;
 	atxdmac->dma.device_prep_dma_memcpy	= at_xdmac_prep_dma_memcpy;
 	atxdmac->dma.device_prep_slave_sg	= at_xdmac_prep_slave_sg;
-	atxdmac->dma.device_control		= at_xdmac_control;
-	atxdmac->dma.device_slave_caps		= at_xdmac_device_slave_caps;
+	atxdmac->dma.device_config		= at_xdmac_device_config;
+	atxdmac->dma.device_pause		= at_xdmac_device_pause;
+	atxdmac->dma.device_resume		= at_xdmac_device_resume;
+	atxdmac->dma.device_terminate_all	= at_xdmac_device_terminate_all;
+	atxdmac->dma.src_addr_widths		= AT_XDMAC_DMA_BUSWIDTHS;
+	atxdmac->dma.dst_addr_widths		= AT_XDMAC_DMA_BUSWIDTHS;
+	atxdmac->dma.directions			= BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	atxdmac->dma.residue_granularity	= DMA_RESIDUE_GRANULARITY_BURST;
 
 	/* Disable all chans and interrupts. */
 	at_xdmac_off(atxdmac);
drivers/dma/bcm2835-dma.c

@@ -436,9 +436,11 @@ static struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic(
 	return vchan_tx_prep(&c->vc, &d->vd, flags);
 }
 
-static int bcm2835_dma_slave_config(struct bcm2835_chan *c,
-		struct dma_slave_config *cfg)
+static int bcm2835_dma_slave_config(struct dma_chan *chan,
+				    struct dma_slave_config *cfg)
 {
+	struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
+
 	if ((cfg->direction == DMA_DEV_TO_MEM &&
 	     cfg->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||
 	    (cfg->direction == DMA_MEM_TO_DEV &&
 
@@ -452,8 +454,9 @@ static int bcm2835_dma_slave_config(struct bcm2835_chan *c,
 	return 0;
 }
 
-static int bcm2835_dma_terminate_all(struct bcm2835_chan *c)
+static int bcm2835_dma_terminate_all(struct dma_chan *chan)
 {
+	struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
 	struct bcm2835_dmadev *d = to_bcm2835_dma_dev(c->vc.chan.device);
 	unsigned long flags;
 	int timeout = 10000;
 
@@ -495,24 +498,6 @@ static int bcm2835_dma_terminate_all(struct bcm2835_chan *c)
 	return 0;
 }
 
-static int bcm2835_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-	unsigned long arg)
-{
-	struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
-
-	switch (cmd) {
-	case DMA_SLAVE_CONFIG:
-		return bcm2835_dma_slave_config(c,
-				(struct dma_slave_config *)arg);
-
-	case DMA_TERMINATE_ALL:
-		return bcm2835_dma_terminate_all(c);
-
-	default:
-		return -ENXIO;
-	}
-}
-
 static int bcm2835_dma_chan_init(struct bcm2835_dmadev *d, int chan_id, int irq)
 {
 	struct bcm2835_chan *c;
 
@@ -565,18 +550,6 @@ static struct dma_chan *bcm2835_dma_xlate(struct of_phandle_args *spec,
 	return chan;
 }
 
-static int bcm2835_dma_device_slave_caps(struct dma_chan *dchan,
-	struct dma_slave_caps *caps)
-{
-	caps->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
-	caps->dstn_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
-	caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
-	caps->cmd_pause = false;
-	caps->cmd_terminate = true;
-
-	return 0;
-}
-
 static int bcm2835_dma_probe(struct platform_device *pdev)
 {
 	struct bcm2835_dmadev *od;
 
@@ -615,9 +588,12 @@ static int bcm2835_dma_probe(struct platform_device *pdev)
 	od->ddev.device_free_chan_resources = bcm2835_dma_free_chan_resources;
 	od->ddev.device_tx_status = bcm2835_dma_tx_status;
 	od->ddev.device_issue_pending = bcm2835_dma_issue_pending;
-	od->ddev.device_slave_caps = bcm2835_dma_device_slave_caps;
 	od->ddev.device_prep_dma_cyclic = bcm2835_dma_prep_dma_cyclic;
-	od->ddev.device_control = bcm2835_dma_control;
+	od->ddev.device_config = bcm2835_dma_slave_config;
+	od->ddev.device_terminate_all = bcm2835_dma_terminate_all;
+	od->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+	od->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+	od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
 	od->ddev.dev = &pdev->dev;
 	INIT_LIST_HEAD(&od->ddev.channels);
 	spin_lock_init(&od->lock);
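A second pattern, visible in the bcm2835 hunks above and repeated in edma and ep93xx below: the callback signature moves from the driver's private channel type to the framework's struct dma_chan, and the private type is recovered inside with a container_of() downcast. A sketch of the idiom using a hypothetical foo_chan (the real drivers' helpers such as to_bcm2835_dma_chan() follow the same shape, though their exact embedding differs):

	#include <linux/dmaengine.h>

	struct foo_chan {
		struct dma_chan chan;	/* embedded framework object */
		int paused;		/* driver-private state */
	};

	static inline struct foo_chan *to_foo_chan(struct dma_chan *c)
	{
		return container_of(c, struct foo_chan, chan);
	}

	/* New-style callback: framework type in, private type recovered inside */
	static int foo_pause(struct dma_chan *chan)
	{
		struct foo_chan *fc = to_foo_chan(chan);

		fc->paused = 1;
		return 0;
	}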
drivers/dma/coh901318.c

@@ -1690,7 +1690,7 @@ static u32 coh901318_get_bytes_left(struct dma_chan *chan)
  * Pauses a transfer without losing data. Enables power save.
  * Use this function in conjunction with coh901318_resume.
  */
-static void coh901318_pause(struct dma_chan *chan)
+static int coh901318_pause(struct dma_chan *chan)
 {
 	u32 val;
 	unsigned long flags;
 
@@ -1730,12 +1730,13 @@ static void coh901318_pause(struct dma_chan *chan)
 	enable_powersave(cohc);
 
 	spin_unlock_irqrestore(&cohc->lock, flags);
+	return 0;
 }
 
 /* Resumes a transfer that has been stopped via 300_dma_stop(..).
    Power save is handled.
 */
-static void coh901318_resume(struct dma_chan *chan)
+static int coh901318_resume(struct dma_chan *chan)
 {
 	u32 val;
 	unsigned long flags;
 
@@ -1760,6 +1761,7 @@ static void coh901318_resume(struct dma_chan *chan)
 	}
 
 	spin_unlock_irqrestore(&cohc->lock, flags);
+	return 0;
 }
 
 bool coh901318_filter_id(struct dma_chan *chan, void *chan_id)
 
@@ -2114,6 +2116,57 @@ static irqreturn_t dma_irq_handler(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static int coh901318_terminate_all(struct dma_chan *chan)
+{
+	unsigned long flags;
+	struct coh901318_chan *cohc = to_coh901318_chan(chan);
+	struct coh901318_desc *cohd;
+	void __iomem *virtbase = cohc->base->virtbase;
+
+	/* The remainder of this function terminates the transfer */
+	coh901318_pause(chan);
+	spin_lock_irqsave(&cohc->lock, flags);
+
+	/* Clear any pending BE or TC interrupt */
+	if (cohc->id < 32) {
+		writel(1 << cohc->id, virtbase + COH901318_BE_INT_CLEAR1);
+		writel(1 << cohc->id, virtbase + COH901318_TC_INT_CLEAR1);
+	} else {
+		writel(1 << (cohc->id - 32), virtbase +
+		       COH901318_BE_INT_CLEAR2);
+		writel(1 << (cohc->id - 32), virtbase +
+		       COH901318_TC_INT_CLEAR2);
+	}
+
+	enable_powersave(cohc);
+
+	while ((cohd = coh901318_first_active_get(cohc))) {
+		/* release the lli allocation*/
+		coh901318_lli_free(&cohc->base->pool, &cohd->lli);
+
+		/* return desc to free-list */
+		coh901318_desc_remove(cohd);
+		coh901318_desc_free(cohc, cohd);
+	}
+
+	while ((cohd = coh901318_first_queued(cohc))) {
+		/* release the lli allocation*/
+		coh901318_lli_free(&cohc->base->pool, &cohd->lli);
+
+		/* return desc to free-list */
+		coh901318_desc_remove(cohd);
+		coh901318_desc_free(cohc, cohd);
+	}
+
+	cohc->nbr_active_done = 0;
+	cohc->busy = 0;
+
+	spin_unlock_irqrestore(&cohc->lock, flags);
+
+	return 0;
+}
+
 static int coh901318_alloc_chan_resources(struct dma_chan *chan)
 {
 	struct coh901318_chan	*cohc = to_coh901318_chan(chan);
 
@@ -2156,7 +2209,7 @@ coh901318_free_chan_resources(struct dma_chan *chan)
 	spin_unlock_irqrestore(&cohc->lock, flags);
 
-	dmaengine_terminate_all(chan);
+	coh901318_terminate_all(chan);
 }
 
@@ -2461,8 +2514,8 @@ static const struct burst_table burst_sizes[] = {
 	},
 };
 
-static void coh901318_dma_set_runtimeconfig(struct dma_chan *chan,
-			struct dma_slave_config *config)
+static int coh901318_dma_set_runtimeconfig(struct dma_chan *chan,
+					   struct dma_slave_config *config)
 {
 	struct coh901318_chan *cohc = to_coh901318_chan(chan);
 	dma_addr_t addr;
 
@@ -2482,7 +2535,7 @@ static void coh901318_dma_set_runtimeconfig(struct dma_chan *chan,
 		maxburst = config->dst_maxburst;
 	} else {
 		dev_err(COHC_2_DEV(cohc), "illegal channel mode\n");
-		return;
+		return -EINVAL;
 	}
 
 	dev_dbg(COHC_2_DEV(cohc), "configure channel for %d byte transfers\n",
 
@@ -2528,7 +2581,7 @@ static void coh901318_dma_set_runtimeconfig(struct dma_chan *chan,
 	default:
 		dev_err(COHC_2_DEV(cohc),
 			"bad runtimeconfig: alien address width\n");
-		return;
+		return -EINVAL;
 	}
 
 	ctrl |= burst_sizes[i].reg;
 
@@ -2538,84 +2591,12 @@ static void coh901318_dma_set_runtimeconfig(struct dma_chan *chan,
 	cohc->addr = addr;
 	cohc->ctrl = ctrl;
+
+	return 0;
 }
 
-static int
-coh901318_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-		  unsigned long arg)
-{
-	unsigned long flags;
-	struct coh901318_chan *cohc = to_coh901318_chan(chan);
-	struct coh901318_desc *cohd;
-	void __iomem *virtbase = cohc->base->virtbase;
-
-	if (cmd == DMA_SLAVE_CONFIG) {
-		struct dma_slave_config *config =
-			(struct dma_slave_config *) arg;
-
-		coh901318_dma_set_runtimeconfig(chan, config);
-		return 0;
-	}
-
-	if (cmd == DMA_PAUSE) {
-		coh901318_pause(chan);
-		return 0;
-	}
-
-	if (cmd == DMA_RESUME) {
-		coh901318_resume(chan);
-		return 0;
-	}
-
-	if (cmd != DMA_TERMINATE_ALL)
-		return -ENXIO;
-
-	/* The remainder of this function terminates the transfer */
-	coh901318_pause(chan);
-	spin_lock_irqsave(&cohc->lock, flags);
-
-	/* Clear any pending BE or TC interrupt */
-	if (cohc->id < 32) {
-		writel(1 << cohc->id, virtbase + COH901318_BE_INT_CLEAR1);
-		writel(1 << cohc->id, virtbase + COH901318_TC_INT_CLEAR1);
-	} else {
-		writel(1 << (cohc->id - 32), virtbase +
-		       COH901318_BE_INT_CLEAR2);
-		writel(1 << (cohc->id - 32), virtbase +
-		       COH901318_TC_INT_CLEAR2);
-	}
-
-	enable_powersave(cohc);
-
-	while ((cohd = coh901318_first_active_get(cohc))) {
-		/* release the lli allocation*/
-		coh901318_lli_free(&cohc->base->pool, &cohd->lli);
-
-		/* return desc to free-list */
-		coh901318_desc_remove(cohd);
-		coh901318_desc_free(cohc, cohd);
-	}
-
-	while ((cohd = coh901318_first_queued(cohc))) {
-		/* release the lli allocation*/
-		coh901318_lli_free(&cohc->base->pool, &cohd->lli);
-
-		/* return desc to free-list */
-		coh901318_desc_remove(cohd);
-		coh901318_desc_free(cohc, cohd);
-	}
-
-	cohc->nbr_active_done = 0;
-	cohc->busy = 0;
-
-	spin_unlock_irqrestore(&cohc->lock, flags);
-
-	return 0;
-}
-
-void coh901318_base_init(struct dma_device *dma, const int *pick_chans,
-			 struct coh901318_base *base)
+static void coh901318_base_init(struct dma_device *dma, const int *pick_chans,
+				struct coh901318_base *base)
 {
 	int chans_i;
 	int i = 0;
 
@@ -2717,7 +2698,10 @@ static int __init coh901318_probe(struct platform_device *pdev)
 	base->dma_slave.device_prep_slave_sg = coh901318_prep_slave_sg;
 	base->dma_slave.device_tx_status = coh901318_tx_status;
 	base->dma_slave.device_issue_pending = coh901318_issue_pending;
-	base->dma_slave.device_control = coh901318_control;
+	base->dma_slave.device_config = coh901318_dma_set_runtimeconfig;
+	base->dma_slave.device_pause = coh901318_pause;
+	base->dma_slave.device_resume = coh901318_resume;
+	base->dma_slave.device_terminate_all = coh901318_terminate_all;
 	base->dma_slave.dev = &pdev->dev;
 
 	err = dma_async_device_register(&base->dma_slave);
 
@@ -2737,7 +2721,10 @@ static int __init coh901318_probe(struct platform_device *pdev)
 	base->dma_memcpy.device_prep_dma_memcpy = coh901318_prep_memcpy;
 	base->dma_memcpy.device_tx_status = coh901318_tx_status;
 	base->dma_memcpy.device_issue_pending = coh901318_issue_pending;
-	base->dma_memcpy.device_control = coh901318_control;
+	base->dma_memcpy.device_config = coh901318_dma_set_runtimeconfig;
+	base->dma_memcpy.device_pause = coh901318_pause;
+	base->dma_memcpy.device_resume = coh901318_resume;
+	base->dma_memcpy.device_terminate_all = coh901318_terminate_all;
 	base->dma_memcpy.dev = &pdev->dev;
 	/*
 	 * This controller can only access address at even 32bit boundaries,
drivers/dma/cppi41.c

@@ -525,12 +525,6 @@ static struct dma_async_tx_descriptor *cppi41_dma_prep_slave_sg(
 	return &c->txd;
 }
 
-static int cpp41_cfg_chan(struct cppi41_channel *c,
-		struct dma_slave_config *cfg)
-{
-	return 0;
-}
-
 static void cppi41_compute_td_desc(struct cppi41_desc *d)
 {
 	d->pd0 = DESC_TYPE_TEARD << DESC_TYPE;
 
@@ -647,28 +641,6 @@ static int cppi41_stop_chan(struct dma_chan *chan)
 	return 0;
 }
 
-static int cppi41_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-	unsigned long arg)
-{
-	struct cppi41_channel *c = to_cpp41_chan(chan);
-	int ret;
-
-	switch (cmd) {
-	case DMA_SLAVE_CONFIG:
-		ret = cpp41_cfg_chan(c, (struct dma_slave_config *) arg);
-		break;
-
-	case DMA_TERMINATE_ALL:
-		ret = cppi41_stop_chan(chan);
-		break;
-
-	default:
-		ret = -ENXIO;
-		break;
-	}
-	return ret;
-}
-
 static void cleanup_chans(struct cppi41_dd *cdd)
 {
 	while (!list_empty(&cdd->ddev.channels)) {
 
@@ -953,7 +925,7 @@ static int cppi41_dma_probe(struct platform_device *pdev)
 	cdd->ddev.device_tx_status = cppi41_dma_tx_status;
 	cdd->ddev.device_issue_pending = cppi41_dma_issue_pending;
 	cdd->ddev.device_prep_slave_sg = cppi41_dma_prep_slave_sg;
-	cdd->ddev.device_control = cppi41_dma_control;
+	cdd->ddev.device_terminate_all = cppi41_stop_chan;
 	cdd->ddev.dev = dev;
 	INIT_LIST_HEAD(&cdd->ddev.channels);
 	cpp41_dma_info.dma_cap = cdd->ddev.cap_mask;
drivers/dma/dma-jz4740.c

@@ -210,7 +210,7 @@ static enum jz4740_dma_transfer_size jz4740_dma_maxburst(u32 maxburst)
 }
 
 static int jz4740_dma_slave_config(struct dma_chan *c,
-	const struct dma_slave_config *config)
+	struct dma_slave_config *config)
 {
 	struct jz4740_dmaengine_chan *chan = to_jz4740_dma_chan(c);
 	struct jz4740_dma_dev *dmadev = jz4740_dma_chan_get_dev(chan);
 
@@ -290,21 +290,6 @@ static int jz4740_dma_terminate_all(struct dma_chan *c)
 	return 0;
 }
 
-static int jz4740_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-	unsigned long arg)
-{
-	struct dma_slave_config *config = (struct dma_slave_config *)arg;
-
-	switch (cmd) {
-	case DMA_SLAVE_CONFIG:
-		return jz4740_dma_slave_config(chan, config);
-	case DMA_TERMINATE_ALL:
-		return jz4740_dma_terminate_all(chan);
-	default:
-		return -ENOSYS;
-	}
-}
-
 static int jz4740_dma_start_transfer(struct jz4740_dmaengine_chan *chan)
 {
 	struct jz4740_dma_dev *dmadev = jz4740_dma_chan_get_dev(chan);
 
@@ -561,7 +546,8 @@ static int jz4740_dma_probe(struct platform_device *pdev)
 	dd->device_issue_pending = jz4740_dma_issue_pending;
 	dd->device_prep_slave_sg = jz4740_dma_prep_slave_sg;
 	dd->device_prep_dma_cyclic = jz4740_dma_prep_dma_cyclic;
-	dd->device_control = jz4740_dma_control;
+	dd->device_config = jz4740_dma_slave_config;
+	dd->device_terminate_all = jz4740_dma_terminate_all;
 	dd->dev = &pdev->dev;
 	INIT_LIST_HEAD(&dd->channels);
drivers/dma/dmaengine.c

@@ -222,31 +222,35 @@ static void balance_ref_count(struct dma_chan *chan)
  */
 static int dma_chan_get(struct dma_chan *chan)
 {
-	int err = -ENODEV;
 	struct module *owner = dma_chan_to_owner(chan);
+	int ret;
 
+	/* The channel is already in use, update client count */
 	if (chan->client_count) {
 		__module_get(owner);
-		err = 0;
-	} else if (try_module_get(owner))
-		err = 0;
+		goto out;
+	}
 
-	if (err == 0)
-		chan->client_count++;
+	if (!try_module_get(owner))
+		return -ENODEV;
 
 	/* allocate upon first client reference */
-	if (chan->client_count == 1 && err == 0) {
-		int desc_cnt = chan->device->device_alloc_chan_resources(chan);
-
-		if (desc_cnt < 0) {
-			err = desc_cnt;
-			chan->client_count = 0;
-			module_put(owner);
-		} else if (!dma_has_cap(DMA_PRIVATE, chan->device->cap_mask))
-			balance_ref_count(chan);
+	if (chan->device->device_alloc_chan_resources) {
+		ret = chan->device->device_alloc_chan_resources(chan);
+		if (ret < 0)
+			goto err_out;
 	}
 
-	return err;
+	if (!dma_has_cap(DMA_PRIVATE, chan->device->cap_mask))
+		balance_ref_count(chan);
+
+out:
+	chan->client_count++;
+	return 0;
+
+err_out:
+	module_put(owner);
+	return ret;
 }
 
 /**
 
@@ -257,11 +261,15 @@ static int dma_chan_get(struct dma_chan *chan)
  */
 static void dma_chan_put(struct dma_chan *chan)
 {
+	/* This channel is not in use, bail out */
 	if (!chan->client_count)
-		return; /* this channel failed alloc_chan_resources */
+		return;
+
 	chan->client_count--;
 	module_put(dma_chan_to_owner(chan));
-	if (chan->client_count == 0)
+
+	/* This channel is not in use anymore, free it */
+	if (!chan->client_count && chan->device->device_free_chan_resources)
 		chan->device->device_free_chan_resources(chan);
 }
 
@@ -471,6 +479,39 @@ static void dma_channel_rebalance(void)
 	}
 }
 
+int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
+{
+	struct dma_device *device;
+
+	if (!chan || !caps)
+		return -EINVAL;
+
+	device = chan->device;
+
+	/* check if the channel supports slave transactions */
+	if (!test_bit(DMA_SLAVE, device->cap_mask.bits))
+		return -ENXIO;
+
+	/*
+	 * Check whether it reports it uses the generic slave
+	 * capabilities, if not, that means it doesn't support any
+	 * kind of slave capabilities reporting.
+	 */
+	if (!device->directions)
+		return -ENXIO;
+
+	caps->src_addr_widths = device->src_addr_widths;
+	caps->dst_addr_widths = device->dst_addr_widths;
+	caps->directions = device->directions;
+	caps->residue_granularity = device->residue_granularity;
+
+	caps->cmd_pause = !!device->device_pause;
+	caps->cmd_terminate = !!device->device_terminate_all;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dma_get_slave_caps);
+
 static struct dma_chan *private_candidate(const dma_cap_mask_t *mask,
 					  struct dma_device *dev,
 					  dma_filter_fn fn, void *fn_param)
 
@@ -811,17 +852,16 @@ int dma_async_device_register(struct dma_device *device)
 		!device->device_prep_dma_sg);
 	BUG_ON(dma_has_cap(DMA_CYCLIC, device->cap_mask) &&
 		!device->device_prep_dma_cyclic);
-	BUG_ON(dma_has_cap(DMA_SLAVE, device->cap_mask) &&
-		!device->device_control);
 	BUG_ON(dma_has_cap(DMA_INTERLEAVE, device->cap_mask) &&
 		!device->device_prep_interleaved_dma);
 
-	BUG_ON(!device->device_alloc_chan_resources);
-	BUG_ON(!device->device_free_chan_resources);
 	BUG_ON(!device->device_tx_status);
 	BUG_ON(!device->device_issue_pending);
 	BUG_ON(!device->dev);
 
+	WARN(dma_has_cap(DMA_SLAVE, device->cap_mask) && !device->directions,
+	     "this driver doesn't support generic slave capabilities reporting\n");
+
 	/* note: this only matters in the
 	 * CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH=n case
 	 */
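dma_get_slave_caps() is the piece client drivers see from this rework: capabilities now come from plain dma_device fields, with cmd_pause and cmd_terminate derived from whether the split callbacks are set at all. A hedged sketch of how a client, for example a generic layer like the ASoC PCM code touched at the bottom of this diff, might consult it (the foo_ function is illustrative):

	#include <linux/dmaengine.h>

	/* Hypothetical client-side check before relying on pause support */
	static int foo_check_chan(struct dma_chan *chan)
	{
		struct dma_slave_caps caps;
		int ret;

		ret = dma_get_slave_caps(chan, &caps);
		if (ret)
			return ret;	/* provider doesn't report generic caps */

		if (!(caps.directions & BIT(DMA_DEV_TO_MEM)))
			return -EINVAL;	/* channel can't do dev-to-mem */

		if (!caps.cmd_pause)
			pr_info("foo: channel cannot pause, falling back\n");

		return 0;
	}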
drivers/dma/dw/core.c

@@ -61,6 +61,13 @@
  */
 #define NR_DESCS_PER_CHANNEL	64
 
+/* The set of bus widths supported by the DMA controller */
+#define DW_DMA_BUSWIDTHS			  \
+	BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED)	| \
+	BIT(DMA_SLAVE_BUSWIDTH_1_BYTE)		| \
+	BIT(DMA_SLAVE_BUSWIDTH_2_BYTES)		| \
+	BIT(DMA_SLAVE_BUSWIDTH_4_BYTES)
+
 /*----------------------------------------------------------------------*/
 
 static struct device *chan2dev(struct dma_chan *chan)
 
@@ -955,8 +962,7 @@ static inline void convert_burst(u32 *maxburst)
 		*maxburst = 0;
 }
 
-static int
-set_runtime_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
+static int dwc_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
 {
 	struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
 
@@ -973,16 +979,25 @@ set_runtime_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
 	return 0;
 }
 
-static inline void dwc_chan_pause(struct dw_dma_chan *dwc)
+static int dwc_pause(struct dma_chan *chan)
 {
-	u32 cfglo = channel_readl(dwc, CFG_LO);
-	unsigned int count = 20;	/* timeout iterations */
+	struct dw_dma_chan	*dwc = to_dw_dma_chan(chan);
+	unsigned long		flags;
+	unsigned int		count = 20;	/* timeout iterations */
+	u32			cfglo;
 
+	spin_lock_irqsave(&dwc->lock, flags);
+
+	cfglo = channel_readl(dwc, CFG_LO);
 	channel_writel(dwc, CFG_LO, cfglo | DWC_CFGL_CH_SUSP);
 	while (!(channel_readl(dwc, CFG_LO) & DWC_CFGL_FIFO_EMPTY) && count--)
 		udelay(2);
 
 	dwc->paused = true;
+
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	return 0;
 }
 
 static inline void dwc_chan_resume(struct dw_dma_chan *dwc)
 
@@ -994,53 +1009,48 @@ static inline void dwc_chan_resume(struct dw_dma_chan *dwc)
 	dwc->paused = false;
 }
 
-static int dwc_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-		       unsigned long arg)
+static int dwc_resume(struct dma_chan *chan)
 {
 	struct dw_dma_chan	*dwc = to_dw_dma_chan(chan);
-	struct dw_dma		*dw = to_dw_dma(chan->device);
-	struct dw_desc		*desc, *_desc;
 	unsigned long		flags;
-	LIST_HEAD(list);
 
-	if (cmd == DMA_PAUSE) {
-		spin_lock_irqsave(&dwc->lock, flags);
+	if (!dwc->paused)
+		return 0;
 
-		dwc_chan_pause(dwc);
+	spin_lock_irqsave(&dwc->lock, flags);
 
-		spin_unlock_irqrestore(&dwc->lock, flags);
-	} else if (cmd == DMA_RESUME) {
-		if (!dwc->paused)
-			return 0;
+	dwc_chan_resume(dwc);
 
-		spin_lock_irqsave(&dwc->lock, flags);
+	spin_unlock_irqrestore(&dwc->lock, flags);
 
-		dwc_chan_resume(dwc);
+	return 0;
+}
 
-		spin_unlock_irqrestore(&dwc->lock, flags);
-	} else if (cmd == DMA_TERMINATE_ALL) {
-		spin_lock_irqsave(&dwc->lock, flags);
+static int dwc_terminate_all(struct dma_chan *chan)
+{
+	struct dw_dma_chan	*dwc = to_dw_dma_chan(chan);
+	struct dw_dma		*dw = to_dw_dma(chan->device);
+	struct dw_desc		*desc, *_desc;
+	unsigned long		flags;
+	LIST_HEAD(list);
 
-		clear_bit(DW_DMA_IS_SOFT_LLP, &dwc->flags);
+	spin_lock_irqsave(&dwc->lock, flags);
 
-		dwc_chan_disable(dw, dwc);
+	clear_bit(DW_DMA_IS_SOFT_LLP, &dwc->flags);
 
-		dwc_chan_resume(dwc);
+	dwc_chan_disable(dw, dwc);
 
-		/* active_list entries will end up before queued entries */
-		list_splice_init(&dwc->queue, &list);
-		list_splice_init(&dwc->active_list, &list);
+	dwc_chan_resume(dwc);
 
-		spin_unlock_irqrestore(&dwc->lock, flags);
+	/* active_list entries will end up before queued entries */
+	list_splice_init(&dwc->queue, &list);
+	list_splice_init(&dwc->active_list, &list);
 
-		/* Flush all pending and queued descriptors */
-		list_for_each_entry_safe(desc, _desc, &list, desc_node)
-			dwc_descriptor_complete(dwc, desc, false);
-	} else if (cmd == DMA_SLAVE_CONFIG) {
-		return set_runtime_config(chan, (struct dma_slave_config *)arg);
-	} else {
-		return -ENXIO;
-	}
+	spin_unlock_irqrestore(&dwc->lock, flags);
+
+	/* Flush all pending and queued descriptors */
+	list_for_each_entry_safe(desc, _desc, &list, desc_node)
+		dwc_descriptor_complete(dwc, desc, false);
 
 	return 0;
 }
 
@@ -1657,13 +1667,23 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
 	dw->dma.device_free_chan_resources = dwc_free_chan_resources;
 
 	dw->dma.device_prep_dma_memcpy = dwc_prep_dma_memcpy;
 	dw->dma.device_prep_slave_sg = dwc_prep_slave_sg;
-	dw->dma.device_control = dwc_control;
+
+	dw->dma.device_config = dwc_config;
+	dw->dma.device_pause = dwc_pause;
+	dw->dma.device_resume = dwc_resume;
+	dw->dma.device_terminate_all = dwc_terminate_all;
 
 	dw->dma.device_tx_status = dwc_tx_status;
 	dw->dma.device_issue_pending = dwc_issue_pending;
 
+	/* DMA capabilities */
+	dw->dma.src_addr_widths = DW_DMA_BUSWIDTHS;
+	dw->dma.dst_addr_widths = DW_DMA_BUSWIDTHS;
+	dw->dma.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV) |
+			     BIT(DMA_MEM_TO_MEM);
+	dw->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+
 	err = dma_async_device_register(&dw->dma);
 	if (err)
 		goto err_dma_register;
drivers/dma/dw/regs.h

@@ -252,7 +252,7 @@ struct dw_dma_chan {
 	u8			src_master;
 	u8			dst_master;
 
-	/* configuration passed via DMA_SLAVE_CONFIG */
+	/* configuration passed via .device_config */
 	struct dma_slave_config dma_sconfig;
 };
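The updated comment points at .device_config, which clients reach through the dmaengine_slave_config() inline helper rather than by issuing DMA_SLAVE_CONFIG themselves. A sketch of that client side, with made-up addresses and widths (foo_ names are hypothetical):

	#include <linux/dmaengine.h>

	/* Hypothetical values; a real driver takes these from its resources */
	static int foo_setup_rx(struct dma_chan *chan, dma_addr_t fifo_addr)
	{
		struct dma_slave_config cfg = {
			/* direction is deprecated in favor of the prep_* argument */
			.direction = DMA_DEV_TO_MEM,
			.src_addr = fifo_addr,
			.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
			.src_maxburst = 4,
		};

		/* lands in the provider's .device_config (dwc_config here) */
		return dmaengine_slave_config(chan, &cfg);
	}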
drivers/dma/edma.c

@@ -244,8 +244,9 @@ static void edma_execute(struct edma_chan *echan)
 	}
 }
 
-static int edma_terminate_all(struct edma_chan *echan)
+static int edma_terminate_all(struct dma_chan *chan)
 {
+	struct edma_chan *echan = to_edma_chan(chan);
 	unsigned long flags;
 	LIST_HEAD(head);
 
@@ -273,9 +274,11 @@ static int edma_terminate_all(struct edma_chan *echan)
 	return 0;
 }
 
-static int edma_slave_config(struct edma_chan *echan,
+static int edma_slave_config(struct dma_chan *chan,
 	struct dma_slave_config *cfg)
 {
+	struct edma_chan *echan = to_edma_chan(chan);
+
 	if (cfg->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
 	    cfg->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
 		return -EINVAL;
 
@@ -285,8 +288,10 @@ static int edma_slave_config(struct edma_chan *echan,
 	return 0;
 }
 
-static int edma_dma_pause(struct edma_chan *echan)
+static int edma_dma_pause(struct dma_chan *chan)
 {
+	struct edma_chan *echan = to_edma_chan(chan);
+
 	/* Pause/Resume only allowed with cyclic mode */
 	if (!echan->edesc || !echan->edesc->cyclic)
 		return -EINVAL;
 
@@ -295,8 +300,10 @@ static int edma_dma_pause(struct edma_chan *echan)
 	return 0;
 }
 
-static int edma_dma_resume(struct edma_chan *echan)
+static int edma_dma_resume(struct dma_chan *chan)
 {
+	struct edma_chan *echan = to_edma_chan(chan);
+
 	/* Pause/Resume only allowed with cyclic mode */
 	if (!echan->edesc->cyclic)
 		return -EINVAL;
 
@@ -305,36 +312,6 @@ static int edma_dma_resume(struct edma_chan *echan)
 	return 0;
 }
 
-static int edma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-			unsigned long arg)
-{
-	int ret = 0;
-	struct dma_slave_config *config;
-	struct edma_chan *echan = to_edma_chan(chan);
-
-	switch (cmd) {
-	case DMA_TERMINATE_ALL:
-		edma_terminate_all(echan);
-		break;
-	case DMA_SLAVE_CONFIG:
-		config = (struct dma_slave_config *)arg;
-		ret = edma_slave_config(echan, config);
-		break;
-	case DMA_PAUSE:
-		ret = edma_dma_pause(echan);
-		break;
-	case DMA_RESUME:
-		ret = edma_dma_resume(echan);
-		break;
-	default:
-		ret = -ENOSYS;
-	}
-
-	return ret;
-}
-
 /*
  * A PaRAM set configuration abstraction used by other modes
  * @chan: Channel who's PaRAM set we're configuring
 
@@ -994,19 +971,6 @@ static void __init edma_chan_init(struct edma_cc *ecc,
 				  BIT(DMA_SLAVE_BUSWIDTH_3_BYTES) | \
 				  BIT(DMA_SLAVE_BUSWIDTH_4_BYTES))
 
-static int edma_dma_device_slave_caps(struct dma_chan *dchan,
-				      struct dma_slave_caps *caps)
-{
-	caps->src_addr_widths = EDMA_DMA_BUSWIDTHS;
-	caps->dstn_addr_widths = EDMA_DMA_BUSWIDTHS;
-	caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
-	caps->cmd_pause = true;
-	caps->cmd_terminate = true;
-	caps->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
-
-	return 0;
-}
-
 static void edma_dma_init(struct edma_cc *ecc, struct dma_device *dma,
 			  struct device *dev)
 {
 
@@ -1017,8 +981,16 @@ static void edma_dma_init(struct edma_cc *ecc, struct dma_device *dma,
 	dma->device_free_chan_resources = edma_free_chan_resources;
 	dma->device_issue_pending = edma_issue_pending;
 	dma->device_tx_status = edma_tx_status;
-	dma->device_control = edma_control;
-	dma->device_slave_caps = edma_dma_device_slave_caps;
+	dma->device_config = edma_slave_config;
+	dma->device_pause = edma_dma_pause;
+	dma->device_resume = edma_dma_resume;
+	dma->device_terminate_all = edma_terminate_all;
+
+	dma->src_addr_widths = EDMA_DMA_BUSWIDTHS;
+	dma->dst_addr_widths = EDMA_DMA_BUSWIDTHS;
+	dma->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	dma->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+
 	dma->dev = dev;
 
 	/*
drivers/dma/ep93xx_dma.c
浏览文件 @
2cd6f792
...
...
@@ -144,7 +144,7 @@ struct ep93xx_dma_desc {
* @queue: pending descriptors which are handled next
* @free_list: list of free descriptors which can be used
* @runtime_addr: physical address currently used as dest/src (M2M only). This
* is set via
%DMA_SLAVE_CONFIG
before slave operation is
* is set via
.device_config
before slave operation is
* prepared
* @runtime_ctrl: M2M runtime values for the control register.
*
...
...
@@ -1164,13 +1164,14 @@ ep93xx_dma_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t dma_addr,
/**
* ep93xx_dma_terminate_all - terminate all transactions
* @
edmac
: channel
* @
chan
: channel
*
* Stops all DMA transactions. All descriptors are put back to the
* @edmac->free_list and callbacks are _not_ called.
*/
static
int
ep93xx_dma_terminate_all
(
struct
ep93xx_dma_chan
*
edmac
)
static
int
ep93xx_dma_terminate_all
(
struct
dma_chan
*
chan
)
{
struct
ep93xx_dma_chan
*
edmac
=
to_ep93xx_dma_chan
(
chan
);
struct
ep93xx_dma_desc
*
desc
,
*
_d
;
unsigned
long
flags
;
LIST_HEAD
(
list
);
...
...
@@ -1194,9 +1195,10 @@ static int ep93xx_dma_terminate_all(struct ep93xx_dma_chan *edmac)
return
0
;
}
static
int
ep93xx_dma_slave_config
(
struct
ep93xx_dma_chan
*
edmac
,
static
int
ep93xx_dma_slave_config
(
struct
dma_chan
*
chan
,
struct
dma_slave_config
*
config
)
{
struct
ep93xx_dma_chan
*
edmac
=
to_ep93xx_dma_chan
(
chan
);
enum
dma_slave_buswidth
width
;
unsigned
long
flags
;
u32
addr
,
ctrl
;
...
...
@@ -1241,36 +1243,6 @@ static int ep93xx_dma_slave_config(struct ep93xx_dma_chan *edmac,
return
0
;
}
/**
* ep93xx_dma_control - manipulate all pending operations on a channel
* @chan: channel
* @cmd: control command to perform
* @arg: optional argument
*
* Controls the channel. Function returns %0 in case of success or negative
* error in case of failure.
*/
static
int
ep93xx_dma_control
(
struct
dma_chan
*
chan
,
enum
dma_ctrl_cmd
cmd
,
unsigned
long
arg
)
{
struct
ep93xx_dma_chan
*
edmac
=
to_ep93xx_dma_chan
(
chan
);
struct
dma_slave_config
*
config
;
switch
(
cmd
)
{
case
DMA_TERMINATE_ALL
:
return
ep93xx_dma_terminate_all
(
edmac
);
case
DMA_SLAVE_CONFIG
:
config
=
(
struct
dma_slave_config
*
)
arg
;
return
ep93xx_dma_slave_config
(
edmac
,
config
);
default:
break
;
}
return
-
ENOSYS
;
}
/**
* ep93xx_dma_tx_status - check if a transaction is completed
* @chan: channel
...
...
@@ -1352,7 +1324,8 @@ static int __init ep93xx_dma_probe(struct platform_device *pdev)
     dma_dev->device_free_chan_resources = ep93xx_dma_free_chan_resources;
     dma_dev->device_prep_slave_sg = ep93xx_dma_prep_slave_sg;
     dma_dev->device_prep_dma_cyclic = ep93xx_dma_prep_dma_cyclic;
-    dma_dev->device_control = ep93xx_dma_control;
+    dma_dev->device_config = ep93xx_dma_slave_config;
+    dma_dev->device_terminate_all = ep93xx_dma_terminate_all;
     dma_dev->device_issue_pending = ep93xx_dma_issue_pending;
     dma_dev->device_tx_status = ep93xx_dma_tx_status;
...
...
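Every converted callback in ep93xx_dma now receives the generic struct dma_chan and recovers the driver channel itself, instead of relying on the core to hand it a driver-specific pointer. The pattern, sketched with hypothetical "foo" names:

/* Hedged sketch of the wrapper idiom; only container_of() is real API. */
#include <linux/dmaengine.h>

struct foo_dma_chan {
    struct dma_chan chan;   /* must be embedded for container_of() */
    /* driver-private channel state ... */
};

static inline struct foo_dma_chan *to_foo_dma_chan(struct dma_chan *chan)
{
    return container_of(chan, struct foo_dma_chan, chan);
}

static int foo_dma_terminate_all(struct dma_chan *chan)
{
    struct foo_dma_chan *fchan = to_foo_dma_chan(chan);

    /* operate on fchan; the pre-conversion code received it directly */
    (void)fchan;
    return 0;
}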
drivers/dma/fsl-edma.c
...
...
@@ -289,62 +289,69 @@ static void fsl_edma_free_desc(struct virt_dma_desc *vdesc)
     kfree(fsl_desc);
 }

-static int fsl_edma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-        unsigned long arg)
+static int fsl_edma_terminate_all(struct dma_chan *chan)
+{
+    struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
+    unsigned long flags;
+    LIST_HEAD(head);
+
+    spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
+    fsl_edma_disable_request(fsl_chan);
+    fsl_chan->edesc = NULL;
+    vchan_get_all_descriptors(&fsl_chan->vchan, &head);
+    spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
+    vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
+    return 0;
+}
+
+static int fsl_edma_pause(struct dma_chan *chan)
+{
+    struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
+    unsigned long flags;
+
+    spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
+    if (fsl_chan->edesc) {
+        fsl_edma_disable_request(fsl_chan);
+        fsl_chan->status = DMA_PAUSED;
+    }
+    spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
+    return 0;
+}
+
+static int fsl_edma_resume(struct dma_chan *chan)
+{
+    struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
+    unsigned long flags;
+
+    spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
+    if (fsl_chan->edesc) {
+        fsl_edma_enable_request(fsl_chan);
+        fsl_chan->status = DMA_IN_PROGRESS;
+    }
+    spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
+    return 0;
+}
+
+static int fsl_edma_slave_config(struct dma_chan *chan,
+                                 struct dma_slave_config *cfg)
+{
+    struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
+
+    fsl_chan->fsc.dir = cfg->direction;
+    if (cfg->direction == DMA_DEV_TO_MEM) {
+        fsl_chan->fsc.dev_addr = cfg->src_addr;
+        fsl_chan->fsc.addr_width = cfg->src_addr_width;
+        fsl_chan->fsc.burst = cfg->src_maxburst;
+        fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg->src_addr_width);
+    } else if (cfg->direction == DMA_MEM_TO_DEV) {
+        fsl_chan->fsc.dev_addr = cfg->dst_addr;
+        fsl_chan->fsc.addr_width = cfg->dst_addr_width;
+        fsl_chan->fsc.burst = cfg->dst_maxburst;
+        fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg->dst_addr_width);
+    } else {
+        return -EINVAL;
+    }
+    return 0;
+}

 static size_t fsl_edma_desc_residue(struct fsl_edma_chan *fsl_chan,
...
...
@@ -780,18 +787,6 @@ static void fsl_edma_free_chan_resources(struct dma_chan *chan)
     fsl_chan->tcd_pool = NULL;
 }

-static int fsl_dma_device_slave_caps(struct dma_chan *dchan,
-        struct dma_slave_caps *caps)
-{
-    caps->src_addr_widths = FSL_EDMA_BUSWIDTHS;
-    caps->dstn_addr_widths = FSL_EDMA_BUSWIDTHS;
-    caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
-    caps->cmd_pause = true;
-    caps->cmd_terminate = true;
-
-    return 0;
-}
-
 static int fsl_edma_irq_init(struct platform_device *pdev,
                              struct fsl_edma_engine *fsl_edma)
 {
...
...
@@ -917,9 +912,15 @@ static int fsl_edma_probe(struct platform_device *pdev)
     fsl_edma->dma_dev.device_tx_status = fsl_edma_tx_status;
     fsl_edma->dma_dev.device_prep_slave_sg = fsl_edma_prep_slave_sg;
     fsl_edma->dma_dev.device_prep_dma_cyclic = fsl_edma_prep_dma_cyclic;
-    fsl_edma->dma_dev.device_control = fsl_edma_control;
+    fsl_edma->dma_dev.device_config = fsl_edma_slave_config;
+    fsl_edma->dma_dev.device_pause = fsl_edma_pause;
+    fsl_edma->dma_dev.device_resume = fsl_edma_resume;
+    fsl_edma->dma_dev.device_terminate_all = fsl_edma_terminate_all;
     fsl_edma->dma_dev.device_issue_pending = fsl_edma_issue_pending;
-    fsl_edma->dma_dev.device_slave_caps = fsl_dma_device_slave_caps;
+
+    fsl_edma->dma_dev.src_addr_widths = FSL_EDMA_BUSWIDTHS;
+    fsl_edma->dma_dev.dst_addr_widths = FSL_EDMA_BUSWIDTHS;
+    fsl_edma->dma_dev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);

     platform_set_drvdata(pdev, fsl_edma);
...
...
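fsl_edma_terminate_all() above follows the usual virt-dma idiom: descriptors are collected under the channel lock and freed only after it is dropped, because the free callbacks may re-acquire the lock. A condensed sketch of just that shape; the bar_* names and the hardware-stop helper are hypothetical, the vchan_* calls are the real virt-dma API:

static int bar_terminate_all(struct dma_chan *chan)
{
    struct bar_chan *c = to_bar_chan(chan);    /* hypothetical wrapper */
    unsigned long flags;
    LIST_HEAD(head);

    spin_lock_irqsave(&c->vchan.lock, flags);
    bar_hw_stop(c);                            /* quiesce the channel */
    vchan_get_all_descriptors(&c->vchan, &head);
    spin_unlock_irqrestore(&c->vchan.lock, flags);

    /* free with the lock dropped; desc_free may take it again */
    vchan_dma_desc_free_list(&c->vchan, &head);
    return 0;
}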
drivers/dma/fsldma.c
...
...
@@ -941,84 +941,56 @@ static struct dma_async_tx_descriptor *fsl_dma_prep_sg(struct dma_chan *dchan,
     return NULL;
 }

-/**
- * fsl_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
- * @chan: DMA channel
- * @sgl: scatterlist to transfer to/from
- * @sg_len: number of entries in @scatterlist
- * @direction: DMA direction
- * @flags: DMAEngine flags
- * @context: transaction context (ignored)
- *
- * Prepare a set of descriptors for a DMA_SLAVE transaction. Following the
- * DMA_SLAVE API, this gets the device-specific information from the
- * chan->private variable.
- */
-static struct dma_async_tx_descriptor *fsl_dma_prep_slave_sg(
-    struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
-    enum dma_transfer_direction direction, unsigned long flags,
-    void *context)
-{
-    /*
-     * This operation is not supported on the Freescale DMA controller
-     *
-     * However, we need to provide the function pointer to allow the
-     * device_control() method to work.
-     */
-    return NULL;
-}
-
-static int fsl_dma_device_control(struct dma_chan *dchan,
-                                  enum dma_ctrl_cmd cmd, unsigned long arg)
+static int fsl_dma_device_terminate_all(struct dma_chan *dchan)
+{
+    struct fsldma_chan *chan;
+
+    if (!dchan)
+        return -EINVAL;
+
+    chan = to_fsl_chan(dchan);
+
+    spin_lock_bh(&chan->desc_lock);
+
+    /* Halt the DMA engine */
+    dma_halt(chan);
+
+    /* Remove and free all of the descriptors in the LD queue */
+    fsldma_free_desc_list(chan, &chan->ld_pending);
+    fsldma_free_desc_list(chan, &chan->ld_running);
+    fsldma_free_desc_list(chan, &chan->ld_completed);
+    chan->idle = true;
+
+    spin_unlock_bh(&chan->desc_lock);
+    return 0;
+}
+
+static int fsl_dma_device_config(struct dma_chan *dchan,
+                                 struct dma_slave_config *config)
 {
     struct fsldma_chan *chan;
     int size;

     if (!dchan)
         return -EINVAL;

     chan = to_fsl_chan(dchan);

-    switch (cmd) {
-    case DMA_TERMINATE_ALL:
-    case DMA_SLAVE_CONFIG:
-    default:
-        return -ENXIO;
-    }
-
     /* make sure the channel supports setting burst size */
     if (!chan->set_request_count)
         return -ENXIO;

     /* we set the controller burst size depending on direction */
     if (config->direction == DMA_MEM_TO_DEV)
         size = config->dst_addr_width * config->dst_maxburst;
     else
         size = config->src_addr_width * config->src_maxburst;

     chan->set_request_count(chan, size);
     return 0;
 }

 /**
  * fsl_dma_memcpy_issue_pending - Issue the DMA start command
  * @chan : Freescale DMA channel
...
...
@@ -1395,10 +1367,15 @@ static int fsldma_of_probe(struct platform_device *op)
     fdev->common.device_prep_dma_sg = fsl_dma_prep_sg;
     fdev->common.device_tx_status = fsl_tx_status;
     fdev->common.device_issue_pending = fsl_dma_memcpy_issue_pending;
-    fdev->common.device_prep_slave_sg = fsl_dma_prep_slave_sg;
-    fdev->common.device_control = fsl_dma_device_control;
+    fdev->common.device_config = fsl_dma_device_config;
+    fdev->common.device_terminate_all = fsl_dma_device_terminate_all;
     fdev->common.dev = &op->dev;

+    fdev->common.src_addr_widths = FSL_DMA_BUSWIDTHS;
+    fdev->common.dst_addr_widths = FSL_DMA_BUSWIDTHS;
+    fdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+    fdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
+
     dma_set_mask(&(op->dev), DMA_BIT_MASK(36));

     platform_set_drvdata(op, fdev);
...
...
drivers/dma/fsldma.h
...
...
@@ -83,6 +83,10 @@
 #define FSL_DMA_DGSR_EOSI	0x02
 #define FSL_DMA_DGSR_EOLSI	0x01

+#define FSL_DMA_BUSWIDTHS	(BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
+				BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
+				BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \
+				BIT(DMA_SLAVE_BUSWIDTH_8_BYTES))
+
 typedef u64 __bitwise v64;
 typedef u32 __bitwise v32;
...
...
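The FSL_DMA_BUSWIDTHS mask added to fsldma.h feeds the new src_addr_widths/dst_addr_widths/directions fields set in fsldma_of_probe() above; with those in place the core can answer capability queries itself, which is why the per-driver device_slave_caps callbacks disappear throughout this merge. A client-side sketch of such a query, with error handling trimmed and the "mydev" name invented:

#include <linux/dmaengine.h>

static bool mydev_chan_usable(struct dma_chan *chan)
{
    struct dma_slave_caps caps;

    if (dma_get_slave_caps(chan, &caps) < 0)
        return false;

    /* requirements below are example values, not from this commit */
    return (caps.directions & BIT(DMA_DEV_TO_MEM)) &&
           (caps.src_addr_widths & BIT(DMA_SLAVE_BUSWIDTH_4_BYTES));
}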
drivers/dma/imx-dma.c
...
...
@@ -664,69 +664,67 @@ static void imxdma_tasklet(unsigned long data)
 }

-static int imxdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-        unsigned long arg)
+static int imxdma_terminate_all(struct dma_chan *chan)
 {
     struct imxdma_channel *imxdmac = to_imxdma_chan(chan);
-    struct dma_slave_config *dmaengine_cfg = (void *)arg;
     struct imxdma_engine *imxdma = imxdmac->imxdma;
     unsigned long flags;
-    unsigned int mode = 0;

+    imxdma_disable_hw(imxdmac);
+
+    spin_lock_irqsave(&imxdma->lock, flags);
+    list_splice_tail_init(&imxdmac->ld_active, &imxdmac->ld_free);
+    list_splice_tail_init(&imxdmac->ld_queue, &imxdmac->ld_free);
+    spin_unlock_irqrestore(&imxdma->lock, flags);
+    return 0;
+}
+
+static int imxdma_config(struct dma_chan *chan,
+                         struct dma_slave_config *dmaengine_cfg)
+{
+    struct imxdma_channel *imxdmac = to_imxdma_chan(chan);
+    struct imxdma_engine *imxdma = imxdmac->imxdma;
+    unsigned int mode = 0;
+
+    if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
+        imxdmac->per_address = dmaengine_cfg->src_addr;
+        imxdmac->watermark_level = dmaengine_cfg->src_maxburst;
+        imxdmac->word_size = dmaengine_cfg->src_addr_width;
+    } else {
+        imxdmac->per_address = dmaengine_cfg->dst_addr;
+        imxdmac->watermark_level = dmaengine_cfg->dst_maxburst;
+        imxdmac->word_size = dmaengine_cfg->dst_addr_width;
+    }
+
+    switch (imxdmac->word_size) {
+    case DMA_SLAVE_BUSWIDTH_1_BYTE:
+        mode = IMX_DMA_MEMSIZE_8;
+        break;
+    case DMA_SLAVE_BUSWIDTH_2_BYTES:
+        mode = IMX_DMA_MEMSIZE_16;
+        break;
+    default:
+    case DMA_SLAVE_BUSWIDTH_4_BYTES:
+        mode = IMX_DMA_MEMSIZE_32;
+        break;
+    }
+
+    imxdmac->hw_chaining = 0;
+
+    imxdmac->ccr_from_device = (mode | IMX_DMA_TYPE_FIFO) |
+        ((IMX_DMA_MEMSIZE_32 | IMX_DMA_TYPE_LINEAR) << 2) |
+        CCR_REN;
+    imxdmac->ccr_to_device = (IMX_DMA_MEMSIZE_32 | IMX_DMA_TYPE_LINEAR) |
+        ((mode | IMX_DMA_TYPE_FIFO) << 2) | CCR_REN;
+    imx_dmav1_writel(imxdma, imxdmac->dma_request,
+                     DMA_RSSR(imxdmac->channel));
+
+    /* Set burst length */
+    imx_dmav1_writel(imxdma, imxdmac->watermark_level *
+                     imxdmac->word_size, DMA_BLR(imxdmac->channel));
+
+    return 0;
 }

 static enum dma_status imxdma_tx_status(struct dma_chan *chan,
...
...
@@ -1179,7 +1177,8 @@ static int __init imxdma_probe(struct platform_device *pdev)
     imxdma->dma_device.device_prep_dma_cyclic = imxdma_prep_dma_cyclic;
     imxdma->dma_device.device_prep_dma_memcpy = imxdma_prep_dma_memcpy;
     imxdma->dma_device.device_prep_interleaved_dma = imxdma_prep_dma_interleaved;
-    imxdma->dma_device.device_control = imxdma_control;
+    imxdma->dma_device.device_config = imxdma_config;
+    imxdma->dma_device.device_terminate_all = imxdma_terminate_all;
     imxdma->dma_device.device_issue_pending = imxdma_issue_pending;

     platform_set_drvdata(pdev, imxdma);
...
...
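imxdma_config() above, like most .device_config implementations in this merge, reads either the src_* or the dst_* triplet of dma_slave_config depending on the transfer direction, so a client only describes the device end of the transfer. A hedged sketch with invented values (only the dst_* fields matter for MEM_TO_DEV):

struct dma_slave_config tx_cfg = {
    .direction      = DMA_MEM_TO_DEV,
    .dst_addr       = 0x43f90040,                 /* made-up TX FIFO */
    .dst_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES, /* -> word_size */
    .dst_maxburst   = 8,                          /* -> watermark_level */
};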
drivers/dma/imx-sdma.c
...
...
@@ -830,20 +830,29 @@ static int sdma_load_context(struct sdma_channel *sdmac)
     return ret;
 }

-static void sdma_disable_channel(struct sdma_channel *sdmac)
+static struct sdma_channel *to_sdma_chan(struct dma_chan *chan)
+{
+    return container_of(chan, struct sdma_channel, chan);
+}
+
+static int sdma_disable_channel(struct dma_chan *chan)
 {
+    struct sdma_channel *sdmac = to_sdma_chan(chan);
     struct sdma_engine *sdma = sdmac->sdma;
     int channel = sdmac->channel;

     writel_relaxed(BIT(channel), sdma->regs + SDMA_H_STATSTOP);
     sdmac->status = DMA_ERROR;
+
+    return 0;
 }

-static int sdma_config_channel(struct sdma_channel *sdmac)
+static int sdma_config_channel(struct dma_chan *chan)
 {
+    struct sdma_channel *sdmac = to_sdma_chan(chan);
     int ret;

-    sdma_disable_channel(sdmac);
+    sdma_disable_channel(chan);

     sdmac->event_mask[0] = 0;
     sdmac->event_mask[1] = 0;
...
...
@@ -935,11 +944,6 @@ static int sdma_request_channel(struct sdma_channel *sdmac)
     return ret;
 }

-static struct sdma_channel *to_sdma_chan(struct dma_chan *chan)
-{
-    return container_of(chan, struct sdma_channel, chan);
-}
-
 static dma_cookie_t sdma_tx_submit(struct dma_async_tx_descriptor *tx)
 {
     unsigned long flags;
...
...
@@ -1004,7 +1008,7 @@ static void sdma_free_chan_resources(struct dma_chan *chan)
     struct sdma_channel *sdmac = to_sdma_chan(chan);
     struct sdma_engine *sdma = sdmac->sdma;

-    sdma_disable_channel(sdmac);
+    sdma_disable_channel(chan);

     if (sdmac->event_id0)
         sdma_event_disable(sdmac, sdmac->event_id0);
...
...
@@ -1203,35 +1207,24 @@ static struct dma_async_tx_descriptor *sdma_prep_dma_cyclic(
     return NULL;
 }

-static int sdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-        unsigned long arg)
+static int sdma_config(struct dma_chan *chan,
+                       struct dma_slave_config *dmaengine_cfg)
 {
     struct sdma_channel *sdmac = to_sdma_chan(chan);
-    struct dma_slave_config *dmaengine_cfg = (void *)arg;
-
-    switch (cmd) {
-    case DMA_TERMINATE_ALL:
-        sdma_disable_channel(sdmac);
-        return 0;
-    case DMA_SLAVE_CONFIG:
-    default:
-        return -ENOSYS;
-    }

-    return -EINVAL;
+    if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
+        sdmac->per_address = dmaengine_cfg->src_addr;
+        sdmac->watermark_level = dmaengine_cfg->src_maxburst *
+            dmaengine_cfg->src_addr_width;
+        sdmac->word_size = dmaengine_cfg->src_addr_width;
+    } else {
+        sdmac->per_address = dmaengine_cfg->dst_addr;
+        sdmac->watermark_level = dmaengine_cfg->dst_maxburst *
+            dmaengine_cfg->dst_addr_width;
+        sdmac->word_size = dmaengine_cfg->dst_addr_width;
+    }
+    sdmac->direction = dmaengine_cfg->direction;
+    return sdma_config_channel(chan);
 }

 static enum dma_status sdma_tx_status(struct dma_chan *chan,
...
...
@@ -1479,7 +1472,7 @@ static int sdma_probe(struct platform_device *pdev)
     if (ret)
         return ret;

-    sdma = kzalloc(sizeof(*sdma), GFP_KERNEL);
+    sdma = devm_kzalloc(&pdev->dev, sizeof(*sdma), GFP_KERNEL);
     if (!sdma)
         return -ENOMEM;
...
...
@@ -1488,48 +1481,34 @@ static int sdma_probe(struct platform_device *pdev)
     sdma->dev = &pdev->dev;
     sdma->drvdata = drvdata;

-    iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
     irq = platform_get_irq(pdev, 0);
-    if (!iores || irq < 0) {
-        ret = -EINVAL;
-        goto err_irq;
-    }
+    if (irq < 0)
+        return irq;

-    if (!request_mem_region(iores->start, resource_size(iores), pdev->name)) {
-        ret = -EBUSY;
-        goto err_request_region;
-    }
+    iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+    sdma->regs = devm_ioremap_resource(&pdev->dev, iores);
+    if (IS_ERR(sdma->regs))
+        return PTR_ERR(sdma->regs);

     sdma->clk_ipg = devm_clk_get(&pdev->dev, "ipg");
-    if (IS_ERR(sdma->clk_ipg)) {
-        ret = PTR_ERR(sdma->clk_ipg);
-        goto err_clk;
-    }
+    if (IS_ERR(sdma->clk_ipg))
+        return PTR_ERR(sdma->clk_ipg);

     sdma->clk_ahb = devm_clk_get(&pdev->dev, "ahb");
-    if (IS_ERR(sdma->clk_ahb)) {
-        ret = PTR_ERR(sdma->clk_ahb);
-        goto err_clk;
-    }
+    if (IS_ERR(sdma->clk_ahb))
+        return PTR_ERR(sdma->clk_ahb);

     clk_prepare(sdma->clk_ipg);
     clk_prepare(sdma->clk_ahb);

-    sdma->regs = ioremap(iores->start, resource_size(iores));
-    if (!sdma->regs) {
-        ret = -ENOMEM;
-        goto err_ioremap;
-    }
-
-    ret = request_irq(irq, sdma_int_handler, 0, "sdma", sdma);
+    ret = devm_request_irq(&pdev->dev, irq, sdma_int_handler, 0, "sdma",
+                           sdma);
     if (ret)
-        goto err_request_irq;
+        return ret;

     sdma->script_addrs = kzalloc(sizeof(*sdma->script_addrs), GFP_KERNEL);
-    if (!sdma->script_addrs) {
-        ret = -ENOMEM;
-        goto err_alloc;
-    }
+    if (!sdma->script_addrs)
+        return -ENOMEM;

     /* initially no scripts available */
     saddr_arr = (s32 *)sdma->script_addrs;
...
...
@@ -1600,7 +1579,12 @@ static int sdma_probe(struct platform_device *pdev)
     sdma->dma_device.device_tx_status = sdma_tx_status;
     sdma->dma_device.device_prep_slave_sg = sdma_prep_slave_sg;
     sdma->dma_device.device_prep_dma_cyclic = sdma_prep_dma_cyclic;
-    sdma->dma_device.device_control = sdma_control;
+    sdma->dma_device.device_config = sdma_config;
+    sdma->dma_device.device_terminate_all = sdma_disable_channel;
+    sdma->dma_device.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+    sdma->dma_device.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+    sdma->dma_device.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+    sdma->dma_device.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
     sdma->dma_device.device_issue_pending = sdma_issue_pending;
     sdma->dma_device.dev->dma_parms = &sdma->dma_parms;
     dma_set_max_seg_size(sdma->dma_device.dev, 65535);
...
...
@@ -1629,38 +1613,22 @@ static int sdma_probe(struct platform_device *pdev)
     dma_async_device_unregister(&sdma->dma_device);
 err_init:
     kfree(sdma->script_addrs);
-err_alloc:
-    free_irq(irq, sdma);
-err_request_irq:
-    iounmap(sdma->regs);
-err_ioremap:
-err_clk:
-    release_mem_region(iores->start, resource_size(iores));
-err_request_region:
-err_irq:
-    kfree(sdma);
     return ret;
 }

 static int sdma_remove(struct platform_device *pdev)
 {
     struct sdma_engine *sdma = platform_get_drvdata(pdev);
-    struct resource *iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-    int irq = platform_get_irq(pdev, 0);
     int i;

     dma_async_device_unregister(&sdma->dma_device);
     kfree(sdma->script_addrs);
-    free_irq(irq, sdma);
-    iounmap(sdma->regs);
-    release_mem_region(iores->start, resource_size(iores));
     /* Kill the tasklet */
     for (i = 0; i < MAX_DMA_CHANNELS; i++) {
         struct sdma_channel *sdmac = &sdma->channel[i];

         tasklet_kill(&sdmac->tasklet);
     }
-    kfree(sdma);

     platform_set_drvdata(pdev, NULL);
     dev_info(&pdev->dev, "Removed...\n");
...
...
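The sdma_probe() conversion above also moves to managed resources: devm_* allocations are released automatically on probe failure or device removal, which is exactly what lets the whole err_* label ladder disappear. A condensed sketch of the same shape, with all names hypothetical:

static int foo_probe(struct platform_device *pdev)
{
    struct resource *res;
    struct foo_dev *fd;
    int irq, ret;

    fd = devm_kzalloc(&pdev->dev, sizeof(*fd), GFP_KERNEL);
    if (!fd)
        return -ENOMEM;

    irq = platform_get_irq(pdev, 0);
    if (irq < 0)
        return irq;

    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    fd->regs = devm_ioremap_resource(&pdev->dev, res);
    if (IS_ERR(fd->regs))
        return PTR_ERR(fd->regs);

    ret = devm_request_irq(&pdev->dev, irq, foo_irq, 0, "foo", fd);
    if (ret)
        return ret;  /* everything above is unwound automatically */

    return 0;
}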
drivers/dma/intel_mid_dma.c
...
...
@@ -492,10 +492,10 @@ static enum dma_status intel_mid_dma_tx_status(struct dma_chan *chan,
     return ret;
 }

-static int dma_slave_control(struct dma_chan *chan, unsigned long arg)
+static int intel_mid_dma_config(struct dma_chan *chan,
+                                struct dma_slave_config *slave)
 {
     struct intel_mid_dma_chan	*midc = to_intel_mid_dma_chan(chan);
-    struct dma_slave_config *slave = (struct dma_slave_config *)arg;
     struct intel_mid_dma_slave *mid_slave;

     BUG_ON(!midc);
...
...
@@ -509,28 +509,14 @@ static int dma_slave_control(struct dma_chan *chan, unsigned long arg)
     midc->mid_slave = mid_slave;
     return 0;
 }

-/**
- * intel_mid_dma_device_control - DMA device control
- * @chan: chan for DMA control
- * @cmd: control cmd
- * @arg: cmd arg value
- *
- * Perform DMA control command
- */
-static int intel_mid_dma_device_control(struct dma_chan *chan,
-    enum dma_ctrl_cmd cmd, unsigned long arg)
+static int intel_mid_dma_terminate_all(struct dma_chan *chan)
 {
     struct intel_mid_dma_chan	*midc = to_intel_mid_dma_chan(chan);
     struct middma_device	*mid = to_middma_device(chan->device);
     struct intel_mid_dma_desc	*desc, *_desc;
     union intel_mid_dma_cfg_lo cfg_lo;

-    if (cmd == DMA_SLAVE_CONFIG)
-        return dma_slave_control(chan, arg);
-
-    if (cmd != DMA_TERMINATE_ALL)
-        return -ENXIO;
-
     spin_lock_bh(&midc->lock);
     if (midc->busy == false) {
         spin_unlock_bh(&midc->lock);
...
...
@@ -1148,7 +1134,8 @@ static int mid_setup_dma(struct pci_dev *pdev)
     dma->common.device_prep_dma_memcpy = intel_mid_dma_prep_memcpy;
     dma->common.device_issue_pending = intel_mid_dma_issue_pending;
     dma->common.device_prep_slave_sg = intel_mid_dma_prep_slave_sg;
-    dma->common.device_control = intel_mid_dma_device_control;
+    dma->common.device_config = intel_mid_dma_config;
+    dma->common.device_terminate_all = intel_mid_dma_terminate_all;

     /*enable dma cntrl*/
     iowrite32(REG_BIT0, dma->dma_base + DMA_CFG);
...
...
drivers/dma/ipu/ipu_idmac.c
...
...
@@ -1398,76 +1398,81 @@ static void idmac_issue_pending(struct dma_chan *chan)
 	 */
 }

-static int __idmac_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-                           unsigned long arg)
+static int idmac_pause(struct dma_chan *chan)
+{
+    struct idmac_channel *ichan = to_idmac_chan(chan);
+    struct idmac *idmac = to_idmac(chan->device);
+    struct ipu *ipu = to_ipu(idmac);
+    struct list_head *list, *tmp;
+    unsigned long flags;
+
+    mutex_lock(&ichan->chan_mutex);
+
+    spin_lock_irqsave(&ipu->lock, flags);
+    ipu_ic_disable_task(ipu, chan->chan_id);
+
+    /* Return all descriptors into "prepared" state */
+    list_for_each_safe(list, tmp, &ichan->queue)
+        list_del_init(list);
+
+    ichan->sg[0] = NULL;
+    ichan->sg[1] = NULL;
+
+    spin_unlock_irqrestore(&ipu->lock, flags);
+
+    ichan->status = IPU_CHANNEL_INITIALIZED;
+
+    mutex_unlock(&ichan->chan_mutex);
+
+    return 0;
+}
+
+static int __idmac_terminate_all(struct dma_chan *chan)
 {
     struct idmac_channel *ichan = to_idmac_chan(chan);
     struct idmac *idmac = to_idmac(chan->device);
     struct ipu *ipu = to_ipu(idmac);
     unsigned long flags;
     int i;

+    ipu_disable_channel(idmac, ichan,
+                        ichan->status >= IPU_CHANNEL_ENABLED);
+
+    tasklet_disable(&ipu->tasklet);
+
     /* ichan->queue is modified in ISR, have to spinlock */
     spin_lock_irqsave(&ichan->lock, flags);
     list_splice_init(&ichan->queue, &ichan->free_list);

     if (ichan->desc)
         for (i = 0; i < ichan->n_tx_desc; i++) {
             struct idmac_tx_desc *desc = ichan->desc + i;
             if (list_empty(&desc->list))
                 /* Descriptor was prepared, but not submitted */
                 list_add(&desc->list, &ichan->free_list);

             async_tx_clear_ack(&desc->txd);
         }

     ichan->sg[0] = NULL;
     ichan->sg[1] = NULL;
     spin_unlock_irqrestore(&ichan->lock, flags);

     tasklet_enable(&ipu->tasklet);

     ichan->status = IPU_CHANNEL_INITIALIZED;

     return 0;
 }

-static int idmac_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-                         unsigned long arg)
+static int idmac_terminate_all(struct dma_chan *chan)
 {
     struct idmac_channel *ichan = to_idmac_chan(chan);
     int ret;

     mutex_lock(&ichan->chan_mutex);
-    ret = __idmac_control(chan, cmd, arg);
+    ret = __idmac_terminate_all(chan);
     mutex_unlock(&ichan->chan_mutex);
...
...
@@ -1568,7 +1573,7 @@ static void idmac_free_chan_resources(struct dma_chan *chan)
     mutex_lock(&ichan->chan_mutex);

-    __idmac_control(chan, DMA_TERMINATE_ALL, 0);
+    __idmac_terminate_all(chan);

     if (ichan->status > IPU_CHANNEL_FREE) {
 #ifdef DEBUG
...
...
@@ -1622,7 +1627,8 @@ static int __init ipu_idmac_init(struct ipu *ipu)
     /* Compulsory for DMA_SLAVE fields */
     dma->device_prep_slave_sg = idmac_prep_slave_sg;
-    dma->device_control = idmac_control;
+    dma->device_pause = idmac_pause;
+    dma->device_terminate_all = idmac_terminate_all;

     INIT_LIST_HEAD(&dma->channels);
     for (i = 0; i < IPU_CHANNELS_NUM; i++) {
...
...
@@ -1655,7 +1661,7 @@ static void ipu_idmac_exit(struct ipu *ipu)
     for (i = 0; i < IPU_CHANNELS_NUM; i++) {
         struct idmac_channel *ichan = ipu->channel + i;

-        idmac_control(&ichan->dma_chan, DMA_TERMINATE_ALL, 0);
+        idmac_terminate_all(&ichan->dma_chan);
     }

     dma_async_device_unregister(&idmac->dma);
...
...
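With pause split into its own hook (idmac_pause() above), a client no longer passes DMA_PAUSE through device_control but uses the core wrappers. A hedged sketch of the pause-then-terminate sequence, with the mydev_* name invented:

static void mydev_quiesce(struct dma_chan *chan)
{
    dmaengine_pause(chan);          /* dispatched to .device_pause */
    /* read residue / drain FIFOs here if needed ... */
    dmaengine_terminate_all(chan);  /* dispatched to .device_terminate_all */
}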
drivers/dma/k3dma.c
...
...
@@ -441,7 +441,7 @@ static struct dma_async_tx_descriptor *k3_dma_prep_memcpy(
     num = 0;

     if (!c->ccfg) {
-        /* default is memtomem, without calling device_control */
+        /* default is memtomem, without calling device_config */
         c->ccfg = CX_CFG_SRCINCR | CX_CFG_DSTINCR | CX_CFG_EN;
         c->ccfg |= (0xf << 20) | (0xf << 24);	/* burst = 16 */
         c->ccfg |= (0x3 << 12) | (0x3 << 16);	/* width = 64 bit */
...
...
@@ -523,112 +523,126 @@ static struct dma_async_tx_descriptor *k3_dma_prep_slave_sg(
     return vchan_tx_prep(&c->vc, &ds->vd, flags);
 }

-static int k3_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-    unsigned long arg)
+static int k3_dma_config(struct dma_chan *chan,
+                         struct dma_slave_config *cfg)
+{
+    struct k3_dma_chan *c = to_k3_chan(chan);
+    u32 maxburst = 0, val = 0;
+    enum dma_slave_buswidth width = DMA_SLAVE_BUSWIDTH_UNDEFINED;
+
+    if (cfg == NULL)
+        return -EINVAL;
+    c->dir = cfg->direction;
+    if (c->dir == DMA_DEV_TO_MEM) {
+        c->ccfg = CX_CFG_DSTINCR;
+        c->dev_addr = cfg->src_addr;
+        maxburst = cfg->src_maxburst;
+        width = cfg->src_addr_width;
+    } else if (c->dir == DMA_MEM_TO_DEV) {
+        c->ccfg = CX_CFG_SRCINCR;
+        c->dev_addr = cfg->dst_addr;
+        maxburst = cfg->dst_maxburst;
+        width = cfg->dst_addr_width;
+    }
+    switch (width) {
+    case DMA_SLAVE_BUSWIDTH_1_BYTE:
+    case DMA_SLAVE_BUSWIDTH_2_BYTES:
+    case DMA_SLAVE_BUSWIDTH_4_BYTES:
+    case DMA_SLAVE_BUSWIDTH_8_BYTES:
+        val = __ffs(width);
+        break;
+    default:
+        val = 3;
+        break;
+    }
+    c->ccfg |= (val << 12) | (val << 16);
+
+    if ((maxburst == 0) || (maxburst > 16))
+        val = 16;
+    else
+        val = maxburst - 1;
+    c->ccfg |= (val << 20) | (val << 24);
+    c->ccfg |= CX_CFG_MEM2PER | CX_CFG_EN;
+
+    /* specific request line */
+    c->ccfg |= c->vc.chan.chan_id << 4;
+
+    return 0;
+}
+
+static int k3_dma_terminate_all(struct dma_chan *chan)
 {
     struct k3_dma_chan *c = to_k3_chan(chan);
     struct k3_dma_dev *d = to_k3_dma(chan->device);
-    struct dma_slave_config *cfg = (void *)arg;
     struct k3_dma_phy *p = c->phy;
     unsigned long flags;
     LIST_HEAD(head);

+    dev_dbg(d->slave.dev, "vchan %p: terminate all\n", &c->vc);
+
+    /* Prevent this channel being scheduled */
+    spin_lock(&d->lock);
+    list_del_init(&c->node);
+    spin_unlock(&d->lock);
+
+    /* Clear the tx descriptor lists */
+    spin_lock_irqsave(&c->vc.lock, flags);
+    vchan_get_all_descriptors(&c->vc, &head);
+    if (p) {
+        /* vchan is assigned to a pchan - stop the channel */
+        k3_dma_terminate_chan(p, d);
+        c->phy = NULL;
+        p->vchan = NULL;
+        p->ds_run = p->ds_done = NULL;
+    }
+    spin_unlock_irqrestore(&c->vc.lock, flags);
+    vchan_dma_desc_free_list(&c->vc, &head);
+
+    return 0;
+}
+
+static int k3_dma_transfer_pause(struct dma_chan *chan)
+{
+    struct k3_dma_chan *c = to_k3_chan(chan);
+    struct k3_dma_dev *d = to_k3_dma(chan->device);
+    struct k3_dma_phy *p = c->phy;
+
+    dev_dbg(d->slave.dev, "vchan %p: pause\n", &c->vc);
+    if (c->status == DMA_IN_PROGRESS) {
+        c->status = DMA_PAUSED;
+        if (p) {
+            k3_dma_pause_dma(p, false);
+        } else {
+            spin_lock(&d->lock);
+            list_del_init(&c->node);
+            spin_unlock(&d->lock);
+        }
+    }
+
+    return 0;
+}
+
+static int k3_dma_transfer_resume(struct dma_chan *chan)
+{
+    struct k3_dma_chan *c = to_k3_chan(chan);
+    struct k3_dma_dev *d = to_k3_dma(chan->device);
+    struct k3_dma_phy *p = c->phy;
+    unsigned long flags;
+
+    dev_dbg(d->slave.dev, "vchan %p: resume\n", &c->vc);
+    spin_lock_irqsave(&c->vc.lock, flags);
+    if (c->status == DMA_PAUSED) {
+        c->status = DMA_IN_PROGRESS;
+        if (p) {
+            k3_dma_pause_dma(p, true);
+        } else if (!list_empty(&c->vc.desc_issued)) {
+            spin_lock(&d->lock);
+            list_add_tail(&c->node, &d->chan_pending);
+            spin_unlock(&d->lock);
+        }
+    }
+    spin_unlock_irqrestore(&c->vc.lock, flags);
+
+    return 0;
+}
...
...
@@ -720,7 +734,10 @@ static int k3_dma_probe(struct platform_device *op)
     d->slave.device_prep_dma_memcpy = k3_dma_prep_memcpy;
     d->slave.device_prep_slave_sg = k3_dma_prep_slave_sg;
     d->slave.device_issue_pending = k3_dma_issue_pending;
-    d->slave.device_control = k3_dma_control;
+    d->slave.device_config = k3_dma_config;
+    d->slave.device_pause = k3_dma_transfer_pause;
+    d->slave.device_resume = k3_dma_transfer_resume;
+    d->slave.device_terminate_all = k3_dma_terminate_all;
     d->slave.copy_align = DMA_ALIGN;

     /* init virtual channel */
...
...
@@ -787,7 +804,7 @@ static int k3_dma_remove(struct platform_device *op)
 }

 #ifdef CONFIG_PM_SLEEP
-static int k3_dma_suspend(struct device *dev)
+static int k3_dma_suspend_dev(struct device *dev)
 {
     struct k3_dma_dev *d = dev_get_drvdata(dev);
     u32 stat = 0;
...
...
@@ -803,7 +820,7 @@ static int k3_dma_suspend(struct device *dev)
     return 0;
 }

-static int k3_dma_resume(struct device *dev)
+static int k3_dma_resume_dev(struct device *dev)
 {
     struct k3_dma_dev *d = dev_get_drvdata(dev);
     int ret = 0;
...
...
@@ -818,7 +835,7 @@ static int k3_dma_resume(struct device *dev)
 }
 #endif

-static SIMPLE_DEV_PM_OPS(k3_dma_pmops, k3_dma_suspend, k3_dma_resume);
+static SIMPLE_DEV_PM_OPS(k3_dma_pmops, k3_dma_suspend_dev, k3_dma_resume_dev);

 static struct platform_driver k3_pdma_driver = {
     .driver		= {
...
...
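The suspend/resume entry points gain a _dev suffix, plausibly to keep the whole-controller PM callbacks visually distinct from the new per-channel device_pause/device_resume hooks; the PM wiring itself is unchanged. The usual shape, sketched with hypothetical names:

#ifdef CONFIG_PM_SLEEP
static int foo_dma_suspend_dev(struct device *dev)
{
    /* stop all physical channels, gate clocks ... */
    return 0;
}

static int foo_dma_resume_dev(struct device *dev)
{
    /* re-enable clocks, reprogram hardware ... */
    return 0;
}
#endif

static SIMPLE_DEV_PM_OPS(foo_dma_pmops, foo_dma_suspend_dev, foo_dma_resume_dev);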
drivers/dma/mmp_pdma.c
...
...
@@ -683,68 +683,70 @@ mmp_pdma_prep_dma_cyclic(struct dma_chan *dchan,
     return NULL;
 }

-static int mmp_pdma_control(struct dma_chan *dchan, enum dma_ctrl_cmd cmd,
-                            unsigned long arg)
+static int mmp_pdma_config(struct dma_chan *dchan,
+                           struct dma_slave_config *cfg)
 {
     struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
-    struct dma_slave_config *cfg = (void *)arg;
-    unsigned long flags;
     u32 maxburst = 0, addr = 0;
     enum dma_slave_buswidth width = DMA_SLAVE_BUSWIDTH_UNDEFINED;

     if (!dchan)
         return -EINVAL;

+    if (cfg->direction == DMA_DEV_TO_MEM) {
+        chan->dcmd = DCMD_INCTRGADDR | DCMD_FLOWSRC;
+        maxburst = cfg->src_maxburst;
+        width = cfg->src_addr_width;
+        addr = cfg->src_addr;
+    } else if (cfg->direction == DMA_MEM_TO_DEV) {
+        chan->dcmd = DCMD_INCSRCADDR | DCMD_FLOWTRG;
+        maxburst = cfg->dst_maxburst;
+        width = cfg->dst_addr_width;
+        addr = cfg->dst_addr;
+    }
+
+    if (width == DMA_SLAVE_BUSWIDTH_1_BYTE)
+        chan->dcmd |= DCMD_WIDTH1;
+    else if (width == DMA_SLAVE_BUSWIDTH_2_BYTES)
+        chan->dcmd |= DCMD_WIDTH2;
+    else if (width == DMA_SLAVE_BUSWIDTH_4_BYTES)
+        chan->dcmd |= DCMD_WIDTH4;
+
+    if (maxburst == 8)
+        chan->dcmd |= DCMD_BURST8;
+    else if (maxburst == 16)
+        chan->dcmd |= DCMD_BURST16;
+    else if (maxburst == 32)
+        chan->dcmd |= DCMD_BURST32;
+
+    chan->dir = cfg->direction;
+    chan->dev_addr = addr;
+    /* FIXME: drivers should be ported over to use the filter
+     * function. Once that's done, the following two lines can
+     * be removed.
+     */
+    if (cfg->slave_id)
+        chan->drcmr = cfg->slave_id;
+
+    return 0;
+}
+
+static int mmp_pdma_terminate_all(struct dma_chan *dchan)
+{
+    struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
+    unsigned long flags;
+
+    if (!dchan)
+        return -EINVAL;
+
+    disable_chan(chan->phy);
+    mmp_pdma_free_phy(chan);
+    spin_lock_irqsave(&chan->desc_lock, flags);
+    mmp_pdma_free_desc_list(chan, &chan->chain_pending);
+    mmp_pdma_free_desc_list(chan, &chan->chain_running);
+    spin_unlock_irqrestore(&chan->desc_lock, flags);
+    chan->idle = true;
+
+    return 0;
+}
...
...
@@ -1061,7 +1063,8 @@ static int mmp_pdma_probe(struct platform_device *op)
     pdev->device.device_prep_slave_sg = mmp_pdma_prep_slave_sg;
     pdev->device.device_prep_dma_cyclic = mmp_pdma_prep_dma_cyclic;
     pdev->device.device_issue_pending = mmp_pdma_issue_pending;
-    pdev->device.device_control = mmp_pdma_control;
+    pdev->device.device_config = mmp_pdma_config;
+    pdev->device.device_terminate_all = mmp_pdma_terminate_all;
     pdev->device.copy_align = PDMA_ALIGNMENT;

     if (pdev->dev->coherent_dma_mask)
...
...
drivers/dma/mmp_tdma.c
...
...
@@ -19,7 +19,6 @@
 #include <linux/dmaengine.h>
 #include <linux/platform_device.h>
 #include <linux/device.h>
-#include <mach/regs-icu.h>
 #include <linux/platform_data/dma-mmp_tdma.h>
 #include <linux/of_device.h>
 #include <linux/of_dma.h>
...
...
@@ -164,33 +163,46 @@ static void mmp_tdma_enable_chan(struct mmp_tdma_chan *tdmac)
     tdmac->status = DMA_IN_PROGRESS;
 }

-static void mmp_tdma_disable_chan(struct mmp_tdma_chan *tdmac)
+static int mmp_tdma_disable_chan(struct dma_chan *chan)
 {
+    struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
+
     writel(readl(tdmac->reg_base + TDCR) & ~TDCR_CHANEN,
                     tdmac->reg_base + TDCR);
     tdmac->status = DMA_COMPLETE;
+
+    return 0;
 }

-static void mmp_tdma_resume_chan(struct mmp_tdma_chan *tdmac)
+static int mmp_tdma_resume_chan(struct dma_chan *chan)
 {
+    struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
+
     writel(readl(tdmac->reg_base + TDCR) | TDCR_CHANEN,
                     tdmac->reg_base + TDCR);
     tdmac->status = DMA_IN_PROGRESS;
+
+    return 0;
 }

-static void mmp_tdma_pause_chan(struct mmp_tdma_chan *tdmac)
+static int mmp_tdma_pause_chan(struct dma_chan *chan)
 {
+    struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
+
     writel(readl(tdmac->reg_base + TDCR) & ~TDCR_CHANEN,
                     tdmac->reg_base + TDCR);
     tdmac->status = DMA_PAUSED;
+
+    return 0;
 }

-static int mmp_tdma_config_chan(struct mmp_tdma_chan *tdmac)
+static int mmp_tdma_config_chan(struct dma_chan *chan)
 {
+    struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
     unsigned int tdcr = 0;

-    mmp_tdma_disable_chan(tdmac);
+    mmp_tdma_disable_chan(chan);

     if (tdmac->dir == DMA_MEM_TO_DEV)
         tdcr = TDCR_DSTDIR_ADDR_HOLD | TDCR_SRCDIR_ADDR_INC;
...
...
@@ -452,42 +464,34 @@ static struct dma_async_tx_descriptor *mmp_tdma_prep_dma_cyclic(
     return NULL;
 }

-static int mmp_tdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-        unsigned long arg)
+static int mmp_tdma_terminate_all(struct dma_chan *chan)
 {
     struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
-    struct dma_slave_config *dmaengine_cfg = (void *)arg;
-    int ret = 0;

-    switch (cmd) {
-    case DMA_TERMINATE_ALL:
-    case DMA_PAUSE:
-    case DMA_RESUME:
-    case DMA_SLAVE_CONFIG:
-    default:
-        ret = -ENOSYS;
-    }
+    mmp_tdma_disable_chan(chan);
+    /* disable interrupt */
+    mmp_tdma_enable_irq(tdmac, false);

-    return ret;
+    return 0;
+}
+
+static int mmp_tdma_config(struct dma_chan *chan,
+                           struct dma_slave_config *dmaengine_cfg)
+{
+    struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
+
+    if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
+        tdmac->dev_addr = dmaengine_cfg->src_addr;
+        tdmac->burst_sz = dmaengine_cfg->src_maxburst;
+        tdmac->buswidth = dmaengine_cfg->src_addr_width;
+    } else {
+        tdmac->dev_addr = dmaengine_cfg->dst_addr;
+        tdmac->burst_sz = dmaengine_cfg->dst_maxburst;
+        tdmac->buswidth = dmaengine_cfg->dst_addr_width;
+    }
+    tdmac->dir = dmaengine_cfg->direction;
+
+    return mmp_tdma_config_chan(chan);
 }

 static enum dma_status mmp_tdma_tx_status(struct dma_chan *chan,
...
...
@@ -668,7 +672,10 @@ static int mmp_tdma_probe(struct platform_device *pdev)
     tdev->device.device_prep_dma_cyclic = mmp_tdma_prep_dma_cyclic;
     tdev->device.device_tx_status = mmp_tdma_tx_status;
     tdev->device.device_issue_pending = mmp_tdma_issue_pending;
-    tdev->device.device_control = mmp_tdma_control;
+    tdev->device.device_config = mmp_tdma_config;
+    tdev->device.device_pause = mmp_tdma_pause_chan;
+    tdev->device.device_resume = mmp_tdma_resume_chan;
+    tdev->device.device_terminate_all = mmp_tdma_terminate_all;
     tdev->device.copy_align = TDMA_ALIGNMENT;

     dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
...
...
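mmp_tdma wires device_pause and device_resume straight to the channel helpers, which becomes possible once every hook takes struct dma_chan and returns int so errors can propagate. A hedged sketch of that registration shape, with the foo names hypothetical:

/* Helpers whose prototypes match the hooks can be registered directly. */
static int foo_pause_chan(struct dma_chan *chan)  { /* gate channel */   return 0; }
static int foo_resume_chan(struct dma_chan *chan) { /* ungate channel */ return 0; }

static void foo_wire_ops(struct dma_device *dd)
{
    dd->device_pause  = foo_pause_chan;   /* no dispatch switch needed */
    dd->device_resume = foo_resume_chan;
}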
drivers/dma/moxart-dma.c
...
...
@@ -263,28 +263,6 @@ static int moxart_slave_config(struct dma_chan *chan,
     return 0;
 }

-static int moxart_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-                          unsigned long arg)
-{
-    int ret = 0;
-
-    switch (cmd) {
-    case DMA_PAUSE:
-    case DMA_RESUME:
-        return -EINVAL;
-    case DMA_TERMINATE_ALL:
-        moxart_terminate_all(chan);
-        break;
-    case DMA_SLAVE_CONFIG:
-        ret = moxart_slave_config(chan, (struct dma_slave_config *)arg);
-        break;
-    default:
-        ret = -ENOSYS;
-    }
-
-    return ret;
-}
-
 static struct dma_async_tx_descriptor *moxart_prep_slave_sg(
     struct dma_chan *chan, struct scatterlist *sgl,
     unsigned int sg_len, enum dma_transfer_direction dir,
...
...
@@ -531,7 +509,8 @@ static void moxart_dma_init(struct dma_device *dma, struct device *dev)
     dma->device_free_chan_resources = moxart_free_chan_resources;
     dma->device_issue_pending = moxart_issue_pending;
     dma->device_tx_status = moxart_tx_status;
-    dma->device_control = moxart_control;
+    dma->device_config = moxart_slave_config;
+    dma->device_terminate_all = moxart_terminate_all;
     dma->dev = dev;

     INIT_LIST_HEAD(&dma->channels);
...
...
drivers/dma/mpc512x_dma.c
...
...
@@ -800,79 +800,69 @@ mpc_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
     return NULL;
 }

-static int mpc_dma_device_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-                                            unsigned long arg)
+static int mpc_dma_device_config(struct dma_chan *chan,
+                                 struct dma_slave_config *cfg)
 {
-    struct mpc_dma_chan *mchan;
-    struct mpc_dma *mdma;
-    struct dma_slave_config *cfg;
+    struct mpc_dma_chan *mchan = dma_chan_to_mpc_dma_chan(chan);
     unsigned long flags;

-    mchan = dma_chan_to_mpc_dma_chan(chan);
-    switch (cmd) {
-    case DMA_TERMINATE_ALL:
-    case DMA_SLAVE_CONFIG:
-    default:
-        /* Unknown command */
-        break;
-    }
-
-    return -ENXIO;
+    /*
+     * Software constraints:
+     * - only transfers between a peripheral device and
+     *   memory are supported;
+     * - only peripheral devices with 4-byte FIFO access register
+     *   are supported;
+     * - minimal transfer chunk is 4 bytes and consequently
+     *   source and destination addresses must be 4-byte aligned
+     *   and transfer size must be aligned on (4 * maxburst)
+     *   boundary;
+     * - during the transfer RAM address is being incremented by
+     *   the size of minimal transfer chunk;
+     * - peripheral port's address is constant during the transfer.
+     */
+
+    if (cfg->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||
+        cfg->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||
+        !IS_ALIGNED(cfg->src_addr, 4) ||
+        !IS_ALIGNED(cfg->dst_addr, 4)) {
+        return -EINVAL;
+    }
+
+    spin_lock_irqsave(&mchan->lock, flags);
+
+    mchan->src_per_paddr = cfg->src_addr;
+    mchan->src_tcd_nunits = cfg->src_maxburst;
+    mchan->dst_per_paddr = cfg->dst_addr;
+    mchan->dst_tcd_nunits = cfg->dst_maxburst;
+
+    /* Apply defaults */
+    if (mchan->src_tcd_nunits == 0)
+        mchan->src_tcd_nunits = 1;
+    if (mchan->dst_tcd_nunits == 0)
+        mchan->dst_tcd_nunits = 1;
+
+    spin_unlock_irqrestore(&mchan->lock, flags);
+
+    return 0;
+}
+
+static int mpc_dma_device_terminate_all(struct dma_chan *chan)
+{
+    struct mpc_dma_chan *mchan = dma_chan_to_mpc_dma_chan(chan);
+    struct mpc_dma *mdma = dma_chan_to_mpc_dma(chan);
+    unsigned long flags;
+
+    /* Disable channel requests */
+    spin_lock_irqsave(&mchan->lock, flags);
+
+    out_8(&mdma->regs->dmacerq, chan->chan_id);
+    list_splice_tail_init(&mchan->prepared, &mchan->free);
+    list_splice_tail_init(&mchan->queued, &mchan->free);
+    list_splice_tail_init(&mchan->active, &mchan->free);
+
+    spin_unlock_irqrestore(&mchan->lock, flags);
+
+    return 0;
 }

 static int mpc_dma_probe(struct platform_device *op)
...
...
@@ -963,7 +953,8 @@ static int mpc_dma_probe(struct platform_device *op)
     dma->device_tx_status = mpc_dma_tx_status;
     dma->device_prep_dma_memcpy = mpc_dma_prep_memcpy;
     dma->device_prep_slave_sg = mpc_dma_prep_slave_sg;
-    dma->device_control = mpc_dma_device_control;
+    dma->device_config = mpc_dma_device_config;
+    dma->device_terminate_all = mpc_dma_device_terminate_all;

     INIT_LIST_HEAD(&dma->channels);
     dma_cap_set(DMA_MEMCPY, dma->cap_mask);
...
...
drivers/dma/mv_xor.c
...
...
@@ -928,14 +928,6 @@ mv_xor_xor_self_test(struct mv_xor_chan *mv_chan)
     return err;
 }

-/* This driver does not implement any of the optional DMA operations. */
-static int mv_xor_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-                          unsigned long arg)
-{
-    return -ENOSYS;
-}
-
 static int mv_xor_channel_remove(struct mv_xor_chan *mv_chan)
 {
     struct dma_chan *chan, *_chan;
...
...
@@ -1008,7 +1000,6 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
     dma_dev->device_free_chan_resources = mv_xor_free_chan_resources;
     dma_dev->device_tx_status = mv_xor_status;
     dma_dev->device_issue_pending = mv_xor_issue_pending;
-    dma_dev->device_control = mv_xor_control;
     dma_dev->dev = &pdev->dev;

     /* set prep routines based on capability */
...
...
drivers/dma/mxs-dma.c
...
...
@@ -202,8 +202,9 @@ static struct mxs_dma_chan *to_mxs_dma_chan(struct dma_chan *chan)
     return container_of(chan, struct mxs_dma_chan, chan);
 }

-static void mxs_dma_reset_chan(struct mxs_dma_chan *mxs_chan)
+static void mxs_dma_reset_chan(struct dma_chan *chan)
 {
+    struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
     struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
     int chan_id = mxs_chan->chan.chan_id;
...
...
@@ -250,8 +251,9 @@ static void mxs_dma_reset_chan(struct mxs_dma_chan *mxs_chan)
     mxs_chan->status = DMA_COMPLETE;
 }

-static void mxs_dma_enable_chan(struct mxs_dma_chan *mxs_chan)
+static void mxs_dma_enable_chan(struct dma_chan *chan)
 {
+    struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
     struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
     int chan_id = mxs_chan->chan.chan_id;
...
...
@@ -272,13 +274,16 @@ static void mxs_dma_enable_chan(struct mxs_dma_chan *mxs_chan)
     mxs_chan->reset = false;
 }

-static void mxs_dma_disable_chan(struct mxs_dma_chan *mxs_chan)
+static void mxs_dma_disable_chan(struct dma_chan *chan)
 {
+    struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
+
     mxs_chan->status = DMA_COMPLETE;
 }

-static void mxs_dma_pause_chan(struct mxs_dma_chan *mxs_chan)
+static int mxs_dma_pause_chan(struct dma_chan *chan)
 {
+    struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
     struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
     int chan_id = mxs_chan->chan.chan_id;
...
...
@@ -291,10 +296,12 @@ static void mxs_dma_pause_chan(struct mxs_dma_chan *mxs_chan)
         mxs_dma->base + HW_APBHX_CHANNEL_CTRL + STMP_OFFSET_REG_SET);

     mxs_chan->status = DMA_PAUSED;
+    return 0;
 }

-static void mxs_dma_resume_chan(struct mxs_dma_chan *mxs_chan)
+static int mxs_dma_resume_chan(struct dma_chan *chan)
 {
+    struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
     struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
     int chan_id = mxs_chan->chan.chan_id;
...
...
@@ -307,6 +314,7 @@ static void mxs_dma_resume_chan(struct mxs_dma_chan *mxs_chan)
         mxs_dma->base + HW_APBHX_CHANNEL_CTRL + STMP_OFFSET_REG_CLR);

     mxs_chan->status = DMA_IN_PROGRESS;
+    return 0;
 }

 static dma_cookie_t mxs_dma_tx_submit(struct dma_async_tx_descriptor *tx)
...
...
@@ -383,7 +391,7 @@ static irqreturn_t mxs_dma_int_handler(int irq, void *dev_id)
"%s: error in channel %d
\n
"
,
__func__
,
chan
);
mxs_chan
->
status
=
DMA_ERROR
;
mxs_dma_reset_chan
(
mxs_
chan
);
mxs_dma_reset_chan
(
&
mxs_chan
->
chan
);
}
else
if
(
mxs_chan
->
status
!=
DMA_COMPLETE
)
{
if
(
mxs_chan
->
flags
&
MXS_DMA_SG_LOOP
)
{
mxs_chan
->
status
=
DMA_IN_PROGRESS
;
...
...
@@ -432,7 +440,7 @@ static int mxs_dma_alloc_chan_resources(struct dma_chan *chan)
     if (ret)
         goto err_clk;

-    mxs_dma_reset_chan(mxs_chan);
+    mxs_dma_reset_chan(chan);

     dma_async_tx_descriptor_init(&mxs_chan->desc, chan);
     mxs_chan->desc.tx_submit = mxs_dma_tx_submit;
...
...
@@ -456,7 +464,7 @@ static void mxs_dma_free_chan_resources(struct dma_chan *chan)
     struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
     struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;

-    mxs_dma_disable_chan(mxs_chan);
+    mxs_dma_disable_chan(chan);

     free_irq(mxs_chan->chan_irq, mxs_dma);
...
...
@@ -651,28 +659,12 @@ static struct dma_async_tx_descriptor *mxs_dma_prep_dma_cyclic(
     return NULL;
 }

-static int mxs_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-        unsigned long arg)
+static int mxs_dma_terminate_all(struct dma_chan *chan)
 {
-    struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
-    int ret = 0;
-
-    switch (cmd) {
-    case DMA_TERMINATE_ALL:
-        mxs_dma_reset_chan(mxs_chan);
-        mxs_dma_disable_chan(mxs_chan);
-        break;
-    case DMA_PAUSE:
-        mxs_dma_pause_chan(mxs_chan);
-        break;
-    case DMA_RESUME:
-        mxs_dma_resume_chan(mxs_chan);
-        break;
-    default:
-        ret = -ENOSYS;
-    }
+    mxs_dma_reset_chan(chan);
+    mxs_dma_disable_chan(chan);

-    return ret;
+    return 0;
 }

 static enum dma_status mxs_dma_tx_status(struct dma_chan *chan,
...
...
@@ -701,13 +693,6 @@ static enum dma_status mxs_dma_tx_status(struct dma_chan *chan,
     return mxs_chan->status;
 }

-static void mxs_dma_issue_pending(struct dma_chan *chan)
-{
-    struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
-
-    mxs_dma_enable_chan(mxs_chan);
-}
-
 static int __init mxs_dma_init(struct mxs_dma_engine *mxs_dma)
 {
     int ret;
...
...
@@ -860,8 +845,14 @@ static int __init mxs_dma_probe(struct platform_device *pdev)
     mxs_dma->dma_device.device_tx_status = mxs_dma_tx_status;
     mxs_dma->dma_device.device_prep_slave_sg = mxs_dma_prep_slave_sg;
     mxs_dma->dma_device.device_prep_dma_cyclic = mxs_dma_prep_dma_cyclic;
-    mxs_dma->dma_device.device_control = mxs_dma_control;
-    mxs_dma->dma_device.device_issue_pending = mxs_dma_issue_pending;
+    mxs_dma->dma_device.device_pause = mxs_dma_pause_chan;
+    mxs_dma->dma_device.device_resume = mxs_dma_resume_chan;
+    mxs_dma->dma_device.device_terminate_all = mxs_dma_terminate_all;
+    mxs_dma->dma_device.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+    mxs_dma->dma_device.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+    mxs_dma->dma_device.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+    mxs_dma->dma_device.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+    mxs_dma->dma_device.device_issue_pending = mxs_dma_enable_chan;

     ret = dma_async_device_register(&mxs_dma->dma_device);
     if (ret) {
...
...
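Across all the drivers in this merge the conversion follows the same mechanical pattern: the multiplexed device_control(chan, cmd, arg) callback, which switched on enum dma_ctrl_cmd and smuggled its argument through an unsigned long, is replaced by one typed hook per operation on struct dma_device. A minimal sketch of the resulting shape, using an invented foo_dma driver (every foo_* name is hypothetical, only the dmaengine types and fields are real):

#include <linux/dmaengine.h>

/* Dedicated callbacks: one op per command, typed arguments, int return. */
static int foo_dma_config(struct dma_chan *chan,
			  struct dma_slave_config *cfg)
{
	/* cfg arrives as a typed pointer; no (unsigned long) cast needed */
	return 0;
}

static int foo_dma_terminate_all(struct dma_chan *chan)
{
	/* stop the hardware, move queued descriptors to the free list */
	return 0;
}

static void foo_dma_setup(struct dma_device *dd)
{
	dd->device_config = foo_dma_config;
	dd->device_terminate_all = foo_dma_terminate_all;
}

Besides the type safety, this lets the core detect at registration time exactly which operations a channel supports, instead of probing with commands and interpreting -ENOSYS/-ENXIO.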
drivers/dma/nbpfaxi.c

@@ -504,7 +504,7 @@ static int nbpf_prep_one(struct nbpf_link_desc *ldesc,
	 * pauses DMA and reads out data received via DMA as well as those left
	 * in the Rx FIFO. For this to work with the RAM side using burst
	 * transfers we enable the SBE bit and terminate the transfer in our
-	 * DMA_PAUSE handler.
+	 * .device_pause handler.
	 */
 	mem_xfer = nbpf_xfer_ds(chan->nbpf, size);

@@ -565,13 +565,6 @@ static void nbpf_configure(struct nbpf_device *nbpf)
 	nbpf_write(nbpf, NBPF_CTRL, NBPF_CTRL_LVINT);
 }
 
-static void nbpf_pause(struct nbpf_channel *chan)
-{
-	nbpf_chan_write(chan, NBPF_CHAN_CTRL, NBPF_CHAN_CTRL_SETSUS);
-	/* See comment in nbpf_prep_one() */
-	nbpf_chan_write(chan, NBPF_CHAN_CTRL, NBPF_CHAN_CTRL_CLREN);
-}
-
 /* Generic part */
 
 /* DMA ENGINE functions */

@@ -837,54 +830,58 @@ static void nbpf_chan_idle(struct nbpf_channel *chan)
 	}
 }
 
-static int nbpf_control(struct dma_chan *dchan, enum dma_ctrl_cmd cmd,
-			unsigned long arg)
-{
-	struct nbpf_channel *chan = nbpf_to_chan(dchan);
-	struct dma_slave_config *config;
-
-	dev_dbg(dchan->device->dev, "Entry %s(%d)\n", __func__, cmd);
-
-	switch (cmd) {
-	case DMA_TERMINATE_ALL:
-		dev_dbg(dchan->device->dev, "Terminating\n");
-		nbpf_chan_halt(chan);
-		nbpf_chan_idle(chan);
-		break;
-
-	case DMA_SLAVE_CONFIG:
-		if (!arg)
-			return -EINVAL;
-		config = (struct dma_slave_config *)arg;
-
-		/*
-		 * We could check config->slave_id to match chan->terminal here,
-		 * but with DT they would be coming from the same source, so
-		 * such a check would be superflous
-		 */
-
-		chan->slave_dst_addr = config->dst_addr;
-		chan->slave_dst_width = nbpf_xfer_size(chan->nbpf,
-						       config->dst_addr_width, 1);
-		chan->slave_dst_burst = nbpf_xfer_size(chan->nbpf,
-						       config->dst_addr_width,
-						       config->dst_maxburst);
-		chan->slave_src_addr = config->src_addr;
-		chan->slave_src_width = nbpf_xfer_size(chan->nbpf,
-						       config->src_addr_width, 1);
-		chan->slave_src_burst = nbpf_xfer_size(chan->nbpf,
-						       config->src_addr_width,
-						       config->src_maxburst);
-		break;
-
-	case DMA_PAUSE:
-		chan->paused = true;
-		nbpf_pause(chan);
-		break;
-
-	default:
-		return -ENXIO;
-	}
+static int nbpf_pause(struct dma_chan *dchan)
+{
+	struct nbpf_channel *chan = nbpf_to_chan(dchan);
+
+	dev_dbg(dchan->device->dev, "Entry %s\n", __func__);
+
+	chan->paused = true;
+	nbpf_chan_write(chan, NBPF_CHAN_CTRL, NBPF_CHAN_CTRL_SETSUS);
+	/* See comment in nbpf_prep_one() */
+	nbpf_chan_write(chan, NBPF_CHAN_CTRL, NBPF_CHAN_CTRL_CLREN);
+
+	return 0;
+}
+
+static int nbpf_terminate_all(struct dma_chan *dchan)
+{
+	struct nbpf_channel *chan = nbpf_to_chan(dchan);
+
+	dev_dbg(dchan->device->dev, "Entry %s\n", __func__);
+	dev_dbg(dchan->device->dev, "Terminating\n");
+
+	nbpf_chan_halt(chan);
+	nbpf_chan_idle(chan);
+
+	return 0;
+}
+
+static int nbpf_config(struct dma_chan *dchan,
+		       struct dma_slave_config *config)
+{
+	struct nbpf_channel *chan = nbpf_to_chan(dchan);
+
+	dev_dbg(dchan->device->dev, "Entry %s\n", __func__);
+
+	/*
+	 * We could check config->slave_id to match chan->terminal here,
+	 * but with DT they would be coming from the same source, so
+	 * such a check would be superflous
+	 */
+
+	chan->slave_dst_addr = config->dst_addr;
+	chan->slave_dst_width = nbpf_xfer_size(chan->nbpf,
+					       config->dst_addr_width, 1);
+	chan->slave_dst_burst = nbpf_xfer_size(chan->nbpf,
+					       config->dst_addr_width,
+					       config->dst_maxburst);
+	chan->slave_src_addr = config->src_addr;
+	chan->slave_src_width = nbpf_xfer_size(chan->nbpf,
+					       config->src_addr_width, 1);
+	chan->slave_src_burst = nbpf_xfer_size(chan->nbpf,
+					       config->src_addr_width,
+					       config->src_maxburst);
 
 	return 0;
 }

@@ -1072,18 +1069,6 @@ static void nbpf_free_chan_resources(struct dma_chan *dchan)
 	}
 }
 
-static int nbpf_slave_caps(struct dma_chan *dchan,
-			   struct dma_slave_caps *caps)
-{
-	caps->src_addr_widths = NBPF_DMA_BUSWIDTHS;
-	caps->dstn_addr_widths = NBPF_DMA_BUSWIDTHS;
-	caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
-	caps->cmd_pause = false;
-	caps->cmd_terminate = true;
-
-	return 0;
-}
-
 static struct dma_chan *nbpf_of_xlate(struct of_phandle_args *dma_spec,
				      struct of_dma *ofdma)
 {

@@ -1414,7 +1399,6 @@ static int nbpf_probe(struct platform_device *pdev)
 	dma_dev->device_prep_dma_memcpy = nbpf_prep_memcpy;
 	dma_dev->device_tx_status = nbpf_tx_status;
 	dma_dev->device_issue_pending = nbpf_issue_pending;
-	dma_dev->device_slave_caps = nbpf_slave_caps;
 
 	/*
	 * If we drop support for unaligned MEMCPY buffer addresses and / or

@@ -1426,7 +1410,13 @@ static int nbpf_probe(struct platform_device *pdev)
 	/* Compulsory for DMA_SLAVE fields */
 	dma_dev->device_prep_slave_sg = nbpf_prep_slave_sg;
-	dma_dev->device_control = nbpf_control;
+	dma_dev->device_config = nbpf_config;
+	dma_dev->device_pause = nbpf_pause;
+	dma_dev->device_terminate_all = nbpf_terminate_all;
+
+	dma_dev->src_addr_widths = NBPF_DMA_BUSWIDTHS;
+	dma_dev->dst_addr_widths = NBPF_DMA_BUSWIDTHS;
+	dma_dev->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
 
 	platform_set_drvdata(pdev, nbpf);
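Client drivers are largely unaffected by this rework because they already go through the dmaengine_*() inline wrappers rather than calling device_control() directly. A minimal consumer-side sketch, assuming an already-wired DT slave named "rx" (the name and dma_slave_config values are illustrative only; the dmaengine calls themselves are the real API):

#include <linux/dmaengine.h>

/* Sketch: configure, pause/resume and tear down one slave channel. */
static void demo_slave_channel(struct device *dev)
{
	struct dma_slave_config cfg = {
		.direction	= DMA_DEV_TO_MEM,	/* illustrative values */
		.src_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
		.src_maxburst	= 4,
	};
	struct dma_chan *chan = dma_request_slave_channel(dev, "rx");

	if (!chan)
		return;

	dmaengine_slave_config(chan, &cfg);	/* -> device_config()        */
	dmaengine_pause(chan);			/* -> device_pause()         */
	dmaengine_resume(chan);			/* -> device_resume()        */
	dmaengine_terminate_all(chan);		/* -> device_terminate_all() */
	dma_release_channel(chan);
}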
drivers/dma/omap-dma.c

@@ -948,8 +948,10 @@ static struct dma_async_tx_descriptor *omap_dma_prep_dma_cyclic(
 	return vchan_tx_prep(&c->vc, &d->vd, flags);
 }
 
-static int omap_dma_slave_config(struct omap_chan *c, struct dma_slave_config *cfg)
+static int omap_dma_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg)
 {
+	struct omap_chan *c = to_omap_dma_chan(chan);
+
 	if (cfg->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
	    cfg->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
		return -EINVAL;

@@ -959,8 +961,9 @@ static int omap_dma_slave_config(struct omap_chan *c, struct dma_slave_config *c
 	return 0;
 }
 
-static int omap_dma_terminate_all(struct omap_chan *c)
+static int omap_dma_terminate_all(struct dma_chan *chan)
 {
+	struct omap_chan *c = to_omap_dma_chan(chan);
 	struct omap_dmadev *d = to_omap_dma_dev(c->vc.chan.device);
 	unsigned long flags;
 	LIST_HEAD(head);

@@ -996,8 +999,10 @@ static int omap_dma_terminate_all(struct omap_chan *c)
 	return 0;
 }
 
-static int omap_dma_pause(struct omap_chan *c)
+static int omap_dma_pause(struct dma_chan *chan)
 {
+	struct omap_chan *c = to_omap_dma_chan(chan);
+
 	/* Pause/Resume only allowed with cyclic mode */
 	if (!c->cyclic)
		return -EINVAL;

@@ -1010,8 +1015,10 @@ static int omap_dma_pause(struct omap_chan *c)
 	return 0;
 }
 
-static int omap_dma_resume(struct omap_chan *c)
+static int omap_dma_resume(struct dma_chan *chan)
 {
+	struct omap_chan *c = to_omap_dma_chan(chan);
+
 	/* Pause/Resume only allowed with cyclic mode */
 	if (!c->cyclic)
		return -EINVAL;

@@ -1029,37 +1036,6 @@ static int omap_dma_resume(struct omap_chan *c)
 	return 0;
 }
 
-static int omap_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-	unsigned long arg)
-{
-	struct omap_chan *c = to_omap_dma_chan(chan);
-	int ret;
-
-	switch (cmd) {
-	case DMA_SLAVE_CONFIG:
-		ret = omap_dma_slave_config(c, (struct dma_slave_config *)arg);
-		break;
-
-	case DMA_TERMINATE_ALL:
-		ret = omap_dma_terminate_all(c);
-		break;
-
-	case DMA_PAUSE:
-		ret = omap_dma_pause(c);
-		break;
-
-	case DMA_RESUME:
-		ret = omap_dma_resume(c);
-		break;
-
-	default:
-		ret = -ENXIO;
-		break;
-	}
-
-	return ret;
-}
-
 static int omap_dma_chan_init(struct omap_dmadev *od, int dma_sig)
 {
 	struct omap_chan *c;

@@ -1094,19 +1070,6 @@ static void omap_dma_free(struct omap_dmadev *od)
				BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
				BIT(DMA_SLAVE_BUSWIDTH_4_BYTES))
 
-static int omap_dma_device_slave_caps(struct dma_chan *dchan,
-				      struct dma_slave_caps *caps)
-{
-	caps->src_addr_widths = OMAP_DMA_BUSWIDTHS;
-	caps->dstn_addr_widths = OMAP_DMA_BUSWIDTHS;
-	caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
-	caps->cmd_pause = true;
-	caps->cmd_terminate = true;
-	caps->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
-
-	return 0;
-}
-
 static int omap_dma_probe(struct platform_device *pdev)
 {
 	struct omap_dmadev *od;

@@ -1136,8 +1099,14 @@ static int omap_dma_probe(struct platform_device *pdev)
 	od->ddev.device_issue_pending = omap_dma_issue_pending;
 	od->ddev.device_prep_slave_sg = omap_dma_prep_slave_sg;
 	od->ddev.device_prep_dma_cyclic = omap_dma_prep_dma_cyclic;
-	od->ddev.device_control = omap_dma_control;
-	od->ddev.device_slave_caps = omap_dma_device_slave_caps;
+	od->ddev.device_config = omap_dma_slave_config;
+	od->ddev.device_pause = omap_dma_pause;
+	od->ddev.device_resume = omap_dma_resume;
+	od->ddev.device_terminate_all = omap_dma_terminate_all;
+	od->ddev.src_addr_widths = OMAP_DMA_BUSWIDTHS;
+	od->ddev.dst_addr_widths = OMAP_DMA_BUSWIDTHS;
+	od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
 	od->ddev.dev = &pdev->dev;
 	INIT_LIST_HEAD(&od->ddev.channels);
 	INIT_LIST_HEAD(&od->pending);
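Because every dedicated callback now receives the generic struct dma_chan instead of a driver-private channel, the first statement of each converted function is the driver's to_*_chan() unwrap, usually a container_of() over the embedded channel. A generic sketch of that idiom (struct foo_chan and the foo_* names are hypothetical; container_of() and the dmaengine types are real):

#include <linux/dmaengine.h>
#include <linux/kernel.h>

/* Hypothetical driver-private channel embedding struct dma_chan. */
struct foo_chan {
	struct dma_chan chan;
	bool cyclic;
};

static inline struct foo_chan *to_foo_chan(struct dma_chan *c)
{
	return container_of(c, struct foo_chan, chan);
}

static int foo_dma_pause(struct dma_chan *chan)
{
	struct foo_chan *c = to_foo_chan(chan);

	/* Like omap-dma above: refuse pause outside cyclic mode. */
	if (!c->cyclic)
		return -EINVAL;

	return 0;
}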
drivers/dma/pch_dma.c

@@ -665,16 +665,12 @@ static struct dma_async_tx_descriptor *pd_prep_slave_sg(struct dma_chan *chan,
 	return NULL;
 }
 
-static int pd_device_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-			     unsigned long arg)
+static int pd_device_terminate_all(struct dma_chan *chan)
 {
 	struct pch_dma_chan *pd_chan = to_pd_chan(chan);
 	struct pch_dma_desc *desc, *_d;
 	LIST_HEAD(list);
 
-	if (cmd != DMA_TERMINATE_ALL)
-		return -ENXIO;
-
 	spin_lock_irq(&pd_chan->lock);
 
 	pdc_set_mode(&pd_chan->chan, DMA_CTL0_DISABLE);

@@ -932,7 +928,7 @@ static int pch_dma_probe(struct pci_dev *pdev,
 	pd->dma.device_tx_status = pd_tx_status;
 	pd->dma.device_issue_pending = pd_issue_pending;
 	pd->dma.device_prep_slave_sg = pd_prep_slave_sg;
-	pd->dma.device_control = pd_device_control;
+	pd->dma.device_terminate_all = pd_device_terminate_all;
 
 	err = dma_async_device_register(&pd->dma);
 	if (err) {
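For single-command drivers like pch_dma the conversion is a pure simplification: the "is this the one command I support?" check disappears, and only .device_terminate_all is populated. A sketch of this degenerate case, assuming a hypothetical bar_dma driver (all bar_* names invented):

#include <linux/dmaengine.h>

static int bar_dma_terminate_all(struct dma_chan *chan)
{
	/* disable the channel, move queued descriptors to the free list */
	return 0;
}

static void bar_dma_setup(struct dma_device *dd)
{
	/*
	 * No pause/resume hardware: leave those hooks NULL and the
	 * dmaengine_pause()/dmaengine_resume() wrappers report -ENOSYS.
	 */
	dd->device_terminate_all = bar_dma_terminate_all;
}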
drivers/dma/pl330.c

@@ -2086,78 +2086,63 @@ static int pl330_alloc_chan_resources(struct dma_chan *chan)
 	return 1;
 }
 
-static int pl330_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd, unsigned long arg)
-{
-	struct dma_pl330_chan *pch = to_pchan(chan);
-	struct dma_pl330_desc *desc;
-	unsigned long flags;
-	struct pl330_dmac *pl330 = pch->dmac;
-	struct dma_slave_config *slave_config;
-	LIST_HEAD(list);
-
-	switch (cmd) {
-	case DMA_TERMINATE_ALL:
-		pm_runtime_get_sync(pl330->ddma.dev);
-		spin_lock_irqsave(&pch->lock, flags);
-
-		spin_lock(&pl330->lock);
-		_stop(pch->thread);
-		spin_unlock(&pl330->lock);
-
-		pch->thread->req[0].desc = NULL;
-		pch->thread->req[1].desc = NULL;
-		pch->thread->req_running = -1;
-
-		/* Mark all desc done */
-		list_for_each_entry(desc, &pch->submitted_list, node) {
-			desc->status = FREE;
-			dma_cookie_complete(&desc->txd);
-		}
-
-		list_for_each_entry(desc, &pch->work_list, node) {
-			desc->status = FREE;
-			dma_cookie_complete(&desc->txd);
-		}
-
-		list_for_each_entry(desc, &pch->completed_list, node) {
-			desc->status = FREE;
-			dma_cookie_complete(&desc->txd);
-		}
-
-		if (!list_empty(&pch->work_list))
-			pm_runtime_put(pl330->ddma.dev);
-
-		list_splice_tail_init(&pch->submitted_list, &pl330->desc_pool);
-		list_splice_tail_init(&pch->work_list, &pl330->desc_pool);
-		list_splice_tail_init(&pch->completed_list, &pl330->desc_pool);
-		spin_unlock_irqrestore(&pch->lock, flags);
-		pm_runtime_mark_last_busy(pl330->ddma.dev);
-		pm_runtime_put_autosuspend(pl330->ddma.dev);
-		break;
-	case DMA_SLAVE_CONFIG:
-		slave_config = (struct dma_slave_config *)arg;
-
-		if (slave_config->direction == DMA_MEM_TO_DEV) {
-			if (slave_config->dst_addr)
-				pch->fifo_addr = slave_config->dst_addr;
-			if (slave_config->dst_addr_width)
-				pch->burst_sz = __ffs(slave_config->dst_addr_width);
-			if (slave_config->dst_maxburst)
-				pch->burst_len = slave_config->dst_maxburst;
-		} else if (slave_config->direction == DMA_DEV_TO_MEM) {
-			if (slave_config->src_addr)
-				pch->fifo_addr = slave_config->src_addr;
-			if (slave_config->src_addr_width)
-				pch->burst_sz = __ffs(slave_config->src_addr_width);
-			if (slave_config->src_maxburst)
-				pch->burst_len = slave_config->src_maxburst;
-		}
-		break;
-	default:
-		dev_err(pch->dmac->ddma.dev, "Not supported command.\n");
-		return -ENXIO;
-	}
-
-	return 0;
-}
+static int pl330_config(struct dma_chan *chan,
+			struct dma_slave_config *slave_config)
+{
+	struct dma_pl330_chan *pch = to_pchan(chan);
+
+	if (slave_config->direction == DMA_MEM_TO_DEV) {
+		if (slave_config->dst_addr)
+			pch->fifo_addr = slave_config->dst_addr;
+		if (slave_config->dst_addr_width)
+			pch->burst_sz = __ffs(slave_config->dst_addr_width);
+		if (slave_config->dst_maxburst)
+			pch->burst_len = slave_config->dst_maxburst;
+	} else if (slave_config->direction == DMA_DEV_TO_MEM) {
+		if (slave_config->src_addr)
+			pch->fifo_addr = slave_config->src_addr;
+		if (slave_config->src_addr_width)
+			pch->burst_sz = __ffs(slave_config->src_addr_width);
+		if (slave_config->src_maxburst)
+			pch->burst_len = slave_config->src_maxburst;
+	}
+
+	return 0;
+}
+
+static int pl330_terminate_all(struct dma_chan *chan)
+{
+	struct dma_pl330_chan *pch = to_pchan(chan);
+	struct dma_pl330_desc *desc;
+	unsigned long flags;
+	struct pl330_dmac *pl330 = pch->dmac;
+	LIST_HEAD(list);
+
+	spin_lock_irqsave(&pch->lock, flags);
+	spin_lock(&pl330->lock);
+	_stop(pch->thread);
+	spin_unlock(&pl330->lock);
+
+	pch->thread->req[0].desc = NULL;
+	pch->thread->req[1].desc = NULL;
+	pch->thread->req_running = -1;
+
+	/* Mark all desc done */
+	list_for_each_entry(desc, &pch->submitted_list, node) {
+		desc->status = FREE;
+		dma_cookie_complete(&desc->txd);
+	}
+
+	list_for_each_entry(desc, &pch->work_list, node) {
+		desc->status = FREE;
+		dma_cookie_complete(&desc->txd);
+	}
+
+	list_splice_tail_init(&pch->submitted_list, &pl330->desc_pool);
+	list_splice_tail_init(&pch->work_list, &pl330->desc_pool);
+	list_splice_tail_init(&pch->completed_list, &pl330->desc_pool);
+	spin_unlock_irqrestore(&pch->lock, flags);
+
+	return 0;
+}

@@ -2623,19 +2608,6 @@ static irqreturn_t pl330_irq_handler(int irq, void *data)
	BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \
	BIT(DMA_SLAVE_BUSWIDTH_8_BYTES)
 
-static int pl330_dma_device_slave_caps(struct dma_chan *dchan,
-	struct dma_slave_caps *caps)
-{
-	caps->src_addr_widths = PL330_DMA_BUSWIDTHS;
-	caps->dstn_addr_widths = PL330_DMA_BUSWIDTHS;
-	caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
-	caps->cmd_pause = false;
-	caps->cmd_terminate = true;
-	caps->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
-
-	return 0;
-}
-
 /*
 * Runtime PM callbacks are provided by amba/bus.c driver.
 *

@@ -2793,9 +2765,13 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
 	pd->device_prep_dma_cyclic = pl330_prep_dma_cyclic;
 	pd->device_tx_status = pl330_tx_status;
 	pd->device_prep_slave_sg = pl330_prep_slave_sg;
-	pd->device_control = pl330_control;
+	pd->device_config = pl330_config;
+	pd->device_terminate_all = pl330_terminate_all;
 	pd->device_issue_pending = pl330_issue_pending;
-	pd->device_slave_caps = pl330_dma_device_slave_caps;
+	pd->src_addr_widths = PL330_DMA_BUSWIDTHS;
+	pd->dst_addr_widths = PL330_DMA_BUSWIDTHS;
+	pd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	pd->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
 
 	ret = dma_async_device_register(pd);
 	if (ret) {

@@ -2847,7 +2823,7 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
 
		/* Flush the channel */
		if (pch->thread) {
-			pl330_control(&pch->chan, DMA_TERMINATE_ALL, 0);
+			pl330_terminate_all(&pch->chan);
			pl330_free_chan_resources(&pch->chan);
		}
	}

@@ -2878,7 +2854,7 @@ static int pl330_remove(struct amba_device *adev)
 
		/* Flush the channel */
		if (pch->thread) {
-			pl330_control(&pch->chan, DMA_TERMINATE_ALL, 0);
+			pl330_terminate_all(&pch->chan);
			pl330_free_chan_resources(&pch->chan);
		}
	}
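pl330_config() shows the common idiom for .device_config: the channel simply caches whichever side of the dma_slave_config matches the transfer direction, and the prep functions consume the cached values later. A condensed sketch of the same idiom under a hypothetical baz_chan (only the dmaengine fields and __ffs() are real; everything baz_* is invented):

#include <linux/bitops.h>
#include <linux/dmaengine.h>
#include <linux/kernel.h>

struct baz_chan {
	struct dma_chan chan;
	dma_addr_t fifo_addr;
	unsigned int burst_sz;	/* log2 of the bus width, as pl330 keeps it */
	unsigned int burst_len;
};

static int baz_config(struct dma_chan *chan, struct dma_slave_config *sc)
{
	struct baz_chan *bc = container_of(chan, struct baz_chan, chan);

	if (sc->direction == DMA_MEM_TO_DEV) {
		bc->fifo_addr = sc->dst_addr;
		bc->burst_sz = __ffs(sc->dst_addr_width);
		bc->burst_len = sc->dst_maxburst;
	} else if (sc->direction == DMA_DEV_TO_MEM) {
		bc->fifo_addr = sc->src_addr;
		bc->burst_sz = __ffs(sc->src_addr_width);
		bc->burst_len = sc->src_maxburst;
	}

	return 0;
}

Caching rather than programming the hardware immediately is deliberate: a config call may arrive while no descriptor is in flight, so the values only take effect at the next prep.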
drivers/dma/qcom_bam_dma.c

@@ -530,11 +530,18 @@ static void bam_free_chan(struct dma_chan *chan)
 * Sets slave configuration for channel
 *
 */
-static void bam_slave_config(struct bam_chan *bchan,
-		struct dma_slave_config *cfg)
+static int bam_slave_config(struct dma_chan *chan,
+			    struct dma_slave_config *cfg)
 {
+	struct bam_chan *bchan = to_bam_chan(chan);
+	unsigned long flag;
+
+	spin_lock_irqsave(&bchan->vc.lock, flag);
 	memcpy(&bchan->slave, cfg, sizeof(*cfg));
 	bchan->reconfigure = 1;
+	spin_unlock_irqrestore(&bchan->vc.lock, flag);
+
+	return 0;
 }
 
 /**

@@ -627,8 +634,9 @@ static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
 * No callbacks are done
 *
 */
-static void bam_dma_terminate_all(struct bam_chan *bchan)
+static int bam_dma_terminate_all(struct dma_chan *chan)
 {
+	struct bam_chan *bchan = to_bam_chan(chan);
 	unsigned long flag;
 	LIST_HEAD(head);

@@ -643,56 +651,46 @@ static void bam_dma_terminate_all(struct bam_chan *bchan)
 	spin_unlock_irqrestore(&bchan->vc.lock, flag);
 
 	vchan_dma_desc_free_list(&bchan->vc, &head);
+
+	return 0;
 }
 
 /**
- * bam_control - DMA device control
+ * bam_pause - Pause DMA channel
 * @chan: dma channel
- * @cmd: control cmd
- * @arg: cmd argument
 *
- * Perform DMA control command
 */
-static int bam_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-	unsigned long arg)
+static int bam_pause(struct dma_chan *chan)
 {
 	struct bam_chan *bchan = to_bam_chan(chan);
 	struct bam_device *bdev = bchan->bdev;
-	int ret = 0;
 	unsigned long flag;
 
-	switch (cmd) {
-	case DMA_PAUSE:
-		spin_lock_irqsave(&bchan->vc.lock, flag);
-		writel_relaxed(1, bam_addr(bdev, bchan->id, BAM_P_HALT));
-		bchan->paused = 1;
-		spin_unlock_irqrestore(&bchan->vc.lock, flag);
-		break;
-
-	case DMA_RESUME:
-		spin_lock_irqsave(&bchan->vc.lock, flag);
-		writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_HALT));
-		bchan->paused = 0;
-		spin_unlock_irqrestore(&bchan->vc.lock, flag);
-		break;
-
-	case DMA_TERMINATE_ALL:
-		bam_dma_terminate_all(bchan);
-		break;
-
-	case DMA_SLAVE_CONFIG:
-		spin_lock_irqsave(&bchan->vc.lock, flag);
-		bam_slave_config(bchan, (struct dma_slave_config *)arg);
-		spin_unlock_irqrestore(&bchan->vc.lock, flag);
-		break;
-
-	default:
-		ret = -ENXIO;
-		break;
-	}
+	spin_lock_irqsave(&bchan->vc.lock, flag);
+	writel_relaxed(1, bam_addr(bdev, bchan->id, BAM_P_HALT));
+	bchan->paused = 1;
+	spin_unlock_irqrestore(&bchan->vc.lock, flag);
+
+	return 0;
+}
+
+/**
+ * bam_resume - Resume DMA channel operations
+ * @chan: dma channel
+ *
+ */
+static int bam_resume(struct dma_chan *chan)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_device *bdev = bchan->bdev;
+	unsigned long flag;
+
+	spin_lock_irqsave(&bchan->vc.lock, flag);
+	writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_HALT));
+	bchan->paused = 0;
+	spin_unlock_irqrestore(&bchan->vc.lock, flag);
 
-	return ret;
+	return 0;
 }
 
 /**

@@ -1148,7 +1146,10 @@ static int bam_dma_probe(struct platform_device *pdev)
 	bdev->common.device_alloc_chan_resources = bam_alloc_chan;
 	bdev->common.device_free_chan_resources = bam_free_chan;
 	bdev->common.device_prep_slave_sg = bam_prep_slave_sg;
-	bdev->common.device_control = bam_control;
+	bdev->common.device_config = bam_slave_config;
+	bdev->common.device_pause = bam_pause;
+	bdev->common.device_resume = bam_resume;
+	bdev->common.device_terminate_all = bam_dma_terminate_all;
 	bdev->common.device_issue_pending = bam_issue_pending;
 	bdev->common.device_tx_status = bam_tx_status;
 	bdev->common.dev = bdev->dev;

@@ -1187,7 +1188,7 @@ static int bam_dma_remove(struct platform_device *pdev)
 	devm_free_irq(bdev->dev, bdev->irq, bdev);
 
 	for (i = 0; i < bdev->num_channels; i++) {
-		bam_dma_terminate_all(&bdev->channels[i]);
+		bam_dma_terminate_all(&bdev->channels[i].vc.chan);
		tasklet_kill(&bdev->channels[i].vc.task);
 
		dma_free_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE,
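bam_pause() and bam_resume() end up as near mirror images: one halt-register write plus a paused flag, both under the virtual-channel lock so they serialize against the IRQ path. The shape, reduced to its essentials under invented names (QUX_HALT_REG and all qux_* identifiers are hypothetical; struct virt_dma_chan and its lock come from the in-tree drivers/dma/virt-dma.h helper):

#include <linux/dmaengine.h>
#include <linux/io.h>
#include <linux/spinlock.h>
#include "virt-dma.h"	/* drivers/dma/virt-dma.h */

#define QUX_HALT_REG	0x24	/* invented register offset */

struct qux_chan {
	struct virt_dma_chan vc;
	void __iomem *base;
	bool paused;
};

static int qux_pause_resume(struct dma_chan *chan, bool pause)
{
	struct qux_chan *qc = container_of(chan, struct qux_chan, vc.chan);
	unsigned long flags;

	spin_lock_irqsave(&qc->vc.lock, flags);
	/* 1 halts the engine, 0 releases it, mirroring BAM_P_HALT above */
	writel_relaxed(pause ? 1 : 0, qc->base + QUX_HALT_REG);
	qc->paused = pause;
	spin_unlock_irqrestore(&qc->vc.lock, flags);

	return 0;
}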
drivers/dma/s3c24xx-dma.c

@@ -384,20 +384,30 @@ static u32 s3c24xx_dma_getbytes_chan(struct s3c24xx_dma_chan *s3cchan)
 	return tc * txd->width;
 }
 
-static int s3c24xx_dma_set_runtime_config(struct s3c24xx_dma_chan *s3cchan,
+static int s3c24xx_dma_set_runtime_config(struct dma_chan *chan,
					  struct dma_slave_config *config)
 {
-	if (!s3cchan->slave)
-		return -EINVAL;
+	struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(chan);
+	unsigned long flags;
+	int ret = 0;
 
	/* Reject definitely invalid configurations */
	if (config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
	    config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
		return -EINVAL;
 
+	spin_lock_irqsave(&s3cchan->vc.lock, flags);
+
+	if (!s3cchan->slave) {
+		ret = -EINVAL;
+		goto out;
+	}
+
 	s3cchan->cfg = *config;
 
-	return 0;
+out:
+	spin_unlock_irqrestore(&s3cchan->vc.lock, flags);
+	return ret;
 }

@@ -703,53 +713,38 @@ static irqreturn_t s3c24xx_dma_irq(int irq, void *data)
 * The DMA ENGINE API
 */
 
-static int s3c24xx_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-			       unsigned long arg)
+static int s3c24xx_dma_terminate_all(struct dma_chan *chan)
 {
 	struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(chan);
 	struct s3c24xx_dma_engine *s3cdma = s3cchan->host;
 	unsigned long flags;
-	int ret = 0;
 
 	spin_lock_irqsave(&s3cchan->vc.lock, flags);
 
-	switch (cmd) {
-	case DMA_SLAVE_CONFIG:
-		ret = s3c24xx_dma_set_runtime_config(s3cchan,
-					(struct dma_slave_config *)arg);
-		break;
-	case DMA_TERMINATE_ALL:
-		if (!s3cchan->phy && !s3cchan->at) {
-			dev_err(&s3cdma->pdev->dev, "trying to terminate already stopped channel %d\n",
-				s3cchan->id);
-			ret = -EINVAL;
-			break;
-		}
-
-		s3cchan->state = S3C24XX_DMA_CHAN_IDLE;
-
-		/* Mark physical channel as free */
-		if (s3cchan->phy)
-			s3c24xx_dma_phy_free(s3cchan);
-
-		/* Dequeue current job */
-		if (s3cchan->at) {
-			s3c24xx_dma_desc_free(&s3cchan->at->vd);
-			s3cchan->at = NULL;
-		}
-
-		/* Dequeue jobs not yet fired as well */
-		s3c24xx_dma_free_txd_list(s3cdma, s3cchan);
-		break;
-	default:
-		/* Unknown command */
-		ret = -ENXIO;
-		break;
+	if (!s3cchan->phy && !s3cchan->at) {
+		dev_err(&s3cdma->pdev->dev, "trying to terminate already stopped channel %d\n",
+			s3cchan->id);
+		return -EINVAL;
 	}
 
+	s3cchan->state = S3C24XX_DMA_CHAN_IDLE;
+
+	/* Mark physical channel as free */
+	if (s3cchan->phy)
+		s3c24xx_dma_phy_free(s3cchan);
+
+	/* Dequeue current job */
+	if (s3cchan->at) {
+		s3c24xx_dma_desc_free(&s3cchan->at->vd);
+		s3cchan->at = NULL;
+	}
+
+	/* Dequeue jobs not yet fired as well */
+	s3c24xx_dma_free_txd_list(s3cdma, s3cchan);
+
 	spin_unlock_irqrestore(&s3cchan->vc.lock, flags);
 
-	return ret;
+	return 0;
 }
 
 static int s3c24xx_dma_alloc_chan_resources(struct dma_chan *chan)

@@ -1300,7 +1295,8 @@ static int s3c24xx_dma_probe(struct platform_device *pdev)
 	s3cdma->memcpy.device_prep_dma_memcpy = s3c24xx_dma_prep_memcpy;
 	s3cdma->memcpy.device_tx_status = s3c24xx_dma_tx_status;
 	s3cdma->memcpy.device_issue_pending = s3c24xx_dma_issue_pending;
-	s3cdma->memcpy.device_control = s3c24xx_dma_control;
+	s3cdma->memcpy.device_config = s3c24xx_dma_set_runtime_config;
+	s3cdma->memcpy.device_terminate_all = s3c24xx_dma_terminate_all;
 
 	/* Initialize slave engine for SoC internal dedicated peripherals */
 	dma_cap_set(DMA_SLAVE, s3cdma->slave.cap_mask);

@@ -1315,7 +1311,8 @@ static int s3c24xx_dma_probe(struct platform_device *pdev)
 	s3cdma->slave.device_issue_pending = s3c24xx_dma_issue_pending;
 	s3cdma->slave.device_prep_slave_sg = s3c24xx_dma_prep_slave_sg;
 	s3cdma->slave.device_prep_dma_cyclic = s3c24xx_dma_prep_dma_cyclic;
-	s3cdma->slave.device_control = s3c24xx_dma_control;
+	s3cdma->slave.device_config = s3c24xx_dma_set_runtime_config;
+	s3cdma->slave.device_terminate_all = s3c24xx_dma_terminate_all;
 
 	/* Register as many memcpy channels as there are physical channels */
 	ret = s3c24xx_dma_init_virtual_channels(s3cdma, &s3cdma->memcpy,
drivers/dma/sa11x0-dma.c

@@ -669,8 +669,10 @@ static struct dma_async_tx_descriptor *sa11x0_dma_prep_dma_cyclic(
 	return vchan_tx_prep(&c->vc, &txd->vd, DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 }
 
-static int sa11x0_dma_slave_config(struct sa11x0_dma_chan *c, struct dma_slave_config *cfg)
+static int sa11x0_dma_device_config(struct dma_chan *chan,
+				    struct dma_slave_config *cfg)
 {
+	struct sa11x0_dma_chan *c = to_sa11x0_dma_chan(chan);
 	u32 ddar = c->ddar & ((0xf << 4) | DDAR_RW);
 	dma_addr_t addr;
 	enum dma_slave_buswidth width;

@@ -704,99 +706,101 @@ static int sa11x0_dma_slave_config(struct sa11x0_dma_chan *c, struct dma_slave_c
 	return 0;
 }
 
-static int sa11x0_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-	unsigned long arg)
-{
-	struct sa11x0_dma_chan *c = to_sa11x0_dma_chan(chan);
-	struct sa11x0_dma_dev *d = to_sa11x0_dma(chan->device);
-	struct sa11x0_dma_phy *p;
-	LIST_HEAD(head);
-	unsigned long flags;
-	int ret;
-
-	switch (cmd) {
-	case DMA_SLAVE_CONFIG:
-		return sa11x0_dma_slave_config(c, (struct dma_slave_config *)arg);
-
-	case DMA_TERMINATE_ALL:
-		dev_dbg(d->slave.dev, "vchan %p: terminate all\n", &c->vc);
-		/* Clear the tx descriptor lists */
-		spin_lock_irqsave(&c->vc.lock, flags);
-		vchan_get_all_descriptors(&c->vc, &head);
-
-		p = c->phy;
-		if (p) {
-			dev_dbg(d->slave.dev, "pchan %u: terminating\n", p->num);
-			/* vchan is assigned to a pchan - stop the channel */
-			writel(DCSR_RUN | DCSR_IE |
-			       DCSR_STRTA | DCSR_DONEA |
-			       DCSR_STRTB | DCSR_DONEB,
-			       p->base + DMA_DCSR_C);
-
-			if (p->txd_load) {
-				if (p->txd_load != p->txd_done)
-					list_add_tail(&p->txd_load->vd.node, &head);
-				p->txd_load = NULL;
-			}
-			if (p->txd_done) {
-				list_add_tail(&p->txd_done->vd.node, &head);
-				p->txd_done = NULL;
-			}
-			c->phy = NULL;
-			spin_lock(&d->lock);
-			p->vchan = NULL;
-			spin_unlock(&d->lock);
-			tasklet_schedule(&d->task);
-		}
-		spin_unlock_irqrestore(&c->vc.lock, flags);
-		vchan_dma_desc_free_list(&c->vc, &head);
-		ret = 0;
-		break;
-
-	case DMA_PAUSE:
-		dev_dbg(d->slave.dev, "vchan %p: pause\n", &c->vc);
-		spin_lock_irqsave(&c->vc.lock, flags);
-		if (c->status == DMA_IN_PROGRESS) {
-			c->status = DMA_PAUSED;
-
-			p = c->phy;
-			if (p) {
-				writel(DCSR_RUN | DCSR_IE, p->base + DMA_DCSR_C);
-			} else {
-				spin_lock(&d->lock);
-				list_del_init(&c->node);
-				spin_unlock(&d->lock);
-			}
-		}
-		spin_unlock_irqrestore(&c->vc.lock, flags);
-		ret = 0;
-		break;
-
-	case DMA_RESUME:
-		dev_dbg(d->slave.dev, "vchan %p: resume\n", &c->vc);
-		spin_lock_irqsave(&c->vc.lock, flags);
-		if (c->status == DMA_PAUSED) {
-			c->status = DMA_IN_PROGRESS;
-
-			p = c->phy;
-			if (p) {
-				writel(DCSR_RUN | DCSR_IE, p->base + DMA_DCSR_S);
-			} else if (!list_empty(&c->vc.desc_issued)) {
-				spin_lock(&d->lock);
-				list_add_tail(&c->node, &d->chan_pending);
-				spin_unlock(&d->lock);
-			}
-		}
-		spin_unlock_irqrestore(&c->vc.lock, flags);
-		ret = 0;
-		break;
-
-	default:
-		ret = -ENXIO;
-		break;
-	}
-
-	return ret;
-}
+static int sa11x0_dma_device_pause(struct dma_chan *chan)
+{
+	struct sa11x0_dma_chan *c = to_sa11x0_dma_chan(chan);
+	struct sa11x0_dma_dev *d = to_sa11x0_dma(chan->device);
+	struct sa11x0_dma_phy *p;
+	LIST_HEAD(head);
+	unsigned long flags;
+
+	dev_dbg(d->slave.dev, "vchan %p: pause\n", &c->vc);
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (c->status == DMA_IN_PROGRESS) {
+		c->status = DMA_PAUSED;
+
+		p = c->phy;
+		if (p) {
+			writel(DCSR_RUN | DCSR_IE, p->base + DMA_DCSR_C);
+		} else {
+			spin_lock(&d->lock);
+			list_del_init(&c->node);
+			spin_unlock(&d->lock);
+		}
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return 0;
+}
+
+static int sa11x0_dma_device_resume(struct dma_chan *chan)
+{
+	struct sa11x0_dma_chan *c = to_sa11x0_dma_chan(chan);
+	struct sa11x0_dma_dev *d = to_sa11x0_dma(chan->device);
+	struct sa11x0_dma_phy *p;
+	LIST_HEAD(head);
+	unsigned long flags;
+
+	dev_dbg(d->slave.dev, "vchan %p: resume\n", &c->vc);
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (c->status == DMA_PAUSED) {
+		c->status = DMA_IN_PROGRESS;
+
+		p = c->phy;
+		if (p) {
+			writel(DCSR_RUN | DCSR_IE, p->base + DMA_DCSR_S);
+		} else if (!list_empty(&c->vc.desc_issued)) {
+			spin_lock(&d->lock);
+			list_add_tail(&c->node, &d->chan_pending);
+			spin_unlock(&d->lock);
+		}
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return 0;
+}
+
+static int sa11x0_dma_device_terminate_all(struct dma_chan *chan)
+{
+	struct sa11x0_dma_chan *c = to_sa11x0_dma_chan(chan);
+	struct sa11x0_dma_dev *d = to_sa11x0_dma(chan->device);
+	struct sa11x0_dma_phy *p;
+	LIST_HEAD(head);
+	unsigned long flags;
+
+	dev_dbg(d->slave.dev, "vchan %p: terminate all\n", &c->vc);
+	/* Clear the tx descriptor lists */
+	spin_lock_irqsave(&c->vc.lock, flags);
+	vchan_get_all_descriptors(&c->vc, &head);
+
+	p = c->phy;
+	if (p) {
+		dev_dbg(d->slave.dev, "pchan %u: terminating\n", p->num);
+		/* vchan is assigned to a pchan - stop the channel */
+		writel(DCSR_RUN | DCSR_IE |
+		       DCSR_STRTA | DCSR_DONEA |
+		       DCSR_STRTB | DCSR_DONEB,
+		       p->base + DMA_DCSR_C);
+
+		if (p->txd_load) {
+			if (p->txd_load != p->txd_done)
+				list_add_tail(&p->txd_load->vd.node, &head);
+			p->txd_load = NULL;
+		}
+		if (p->txd_done) {
+			list_add_tail(&p->txd_done->vd.node, &head);
+			p->txd_done = NULL;
+		}
+
+		c->phy = NULL;
+		spin_lock(&d->lock);
+		p->vchan = NULL;
+		spin_unlock(&d->lock);
+		tasklet_schedule(&d->task);
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+	vchan_dma_desc_free_list(&c->vc, &head);
+
+	return 0;
+}
 
 struct sa11x0_dma_channel_desc {

@@ -833,7 +837,10 @@ static int sa11x0_dma_init_dmadev(struct dma_device *dmadev,
 	dmadev->dev = dev;
 	dmadev->device_alloc_chan_resources = sa11x0_dma_alloc_chan_resources;
 	dmadev->device_free_chan_resources = sa11x0_dma_free_chan_resources;
-	dmadev->device_control = sa11x0_dma_control;
+	dmadev->device_config = sa11x0_dma_device_config;
+	dmadev->device_pause = sa11x0_dma_device_pause;
+	dmadev->device_resume = sa11x0_dma_device_resume;
+	dmadev->device_terminate_all = sa11x0_dma_device_terminate_all;
 	dmadev->device_tx_status = sa11x0_dma_tx_status;
 	dmadev->device_issue_pending = sa11x0_dma_issue_pending;
drivers/dma/sh/rcar-hpbdma.c

@@ -534,6 +534,8 @@ static int hpb_dmae_chan_probe(struct hpb_dmae_device *hpbdev, int id)
 
 static int hpb_dmae_probe(struct platform_device *pdev)
 {
+	const enum dma_slave_buswidth widths = DMA_SLAVE_BUSWIDTH_1_BYTE |
+		DMA_SLAVE_BUSWIDTH_2_BYTES | DMA_SLAVE_BUSWIDTH_4_BYTES;
 	struct hpb_dmae_pdata *pdata = pdev->dev.platform_data;
 	struct hpb_dmae_device *hpbdev;
 	struct dma_device *dma_dev;

@@ -595,6 +597,10 @@ static int hpb_dmae_probe(struct platform_device *pdev)
 
 	dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask);
 	dma_cap_set(DMA_SLAVE, dma_dev->cap_mask);
+	dma_dev->src_addr_widths = widths;
+	dma_dev->dst_addr_widths = widths;
+	dma_dev->directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);
+	dma_dev->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
 
 	hpbdev->shdma_dev.ops = &hpb_dmae_ops;
 	hpbdev->shdma_dev.desc_size = sizeof(struct hpb_desc);
drivers/dma/sh/shdma-base.c

@@ -729,57 +729,50 @@ static struct dma_async_tx_descriptor *shdma_prep_dma_cyclic(
 	return desc;
 }
 
-static int shdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-			 unsigned long arg)
+static int shdma_terminate_all(struct dma_chan *chan)
 {
 	struct shdma_chan *schan = to_shdma_chan(chan);
 	struct shdma_dev *sdev = to_shdma_dev(chan->device);
 	const struct shdma_ops *ops = sdev->ops;
-	struct dma_slave_config *config;
 	unsigned long flags;
-	int ret;
 
-	switch (cmd) {
-	case DMA_TERMINATE_ALL:
-		spin_lock_irqsave(&schan->chan_lock, flags);
-		ops->halt_channel(schan);
+	spin_lock_irqsave(&schan->chan_lock, flags);
+	ops->halt_channel(schan);
 
-		if (ops->get_partial && !list_empty(&schan->ld_queue)) {
-			/* Record partial transfer */
-			struct shdma_desc *desc = list_first_entry(&schan->ld_queue,
-						struct shdma_desc, node);
-			desc->partial = ops->get_partial(schan, desc);
-		}
+	if (ops->get_partial && !list_empty(&schan->ld_queue)) {
+		/* Record partial transfer */
+		struct shdma_desc *desc = list_first_entry(&schan->ld_queue,
+							   struct shdma_desc, node);
+		desc->partial = ops->get_partial(schan, desc);
+	}
 
-		spin_unlock_irqrestore(&schan->chan_lock, flags);
+	spin_unlock_irqrestore(&schan->chan_lock, flags);
 
-		shdma_chan_ld_cleanup(schan, true);
-		break;
-	case DMA_SLAVE_CONFIG:
-		/*
-		 * So far only .slave_id is used, but the slave drivers are
-		 * encouraged to also set a transfer direction and an address.
-		 */
-		if (!arg)
-			return -EINVAL;
-		/*
-		 * We could lock this, but you shouldn't be configuring the
-		 * channel, while using it...
-		 */
-		config = (struct dma_slave_config *)arg;
-		ret = shdma_setup_slave(schan, config->slave_id,
-					config->direction == DMA_DEV_TO_MEM ?
-					config->src_addr : config->dst_addr);
-		if (ret < 0)
-			return ret;
-		break;
-	default:
-		return -ENXIO;
-	}
+	shdma_chan_ld_cleanup(schan, true);
 
 	return 0;
 }
 
+static int shdma_config(struct dma_chan *chan,
+			struct dma_slave_config *config)
+{
+	struct shdma_chan *schan = to_shdma_chan(chan);
+
+	/*
+	 * So far only .slave_id is used, but the slave drivers are
+	 * encouraged to also set a transfer direction and an address.
+	 */
+	if (!config)
+		return -EINVAL;
+	/*
+	 * We could lock this, but you shouldn't be configuring the
+	 * channel, while using it...
+	 */
+	return shdma_setup_slave(schan, config->slave_id,
+				 config->direction == DMA_DEV_TO_MEM ?
+				 config->src_addr : config->dst_addr);
+}
+
 static void shdma_issue_pending(struct dma_chan *chan)
 {
 	struct shdma_chan *schan = to_shdma_chan(chan);

@@ -1002,7 +995,8 @@ int shdma_init(struct device *dev, struct shdma_dev *sdev,
 	/* Compulsory for DMA_SLAVE fields */
 	dma_dev->device_prep_slave_sg = shdma_prep_slave_sg;
 	dma_dev->device_prep_dma_cyclic = shdma_prep_dma_cyclic;
-	dma_dev->device_control = shdma_control;
+	dma_dev->device_config = shdma_config;
+	dma_dev->device_terminate_all = shdma_terminate_all;
 
 	dma_dev->dev = dev;
drivers/dma/sh/shdmac.c

@@ -684,6 +684,10 @@ MODULE_DEVICE_TABLE(of, sh_dmae_of_match);
 
 static int sh_dmae_probe(struct platform_device *pdev)
 {
+	const enum dma_slave_buswidth widths = DMA_SLAVE_BUSWIDTH_1_BYTE |
+		DMA_SLAVE_BUSWIDTH_2_BYTES | DMA_SLAVE_BUSWIDTH_4_BYTES |
+		DMA_SLAVE_BUSWIDTH_8_BYTES | DMA_SLAVE_BUSWIDTH_16_BYTES |
+		DMA_SLAVE_BUSWIDTH_32_BYTES;
 	const struct sh_dmae_pdata *pdata;
 	unsigned long chan_flag[SH_DMAE_MAX_CHANNELS] = {};
 	int chan_irq[SH_DMAE_MAX_CHANNELS];

@@ -746,6 +750,11 @@ static int sh_dmae_probe(struct platform_device *pdev)
 			return PTR_ERR(shdev->dmars);
 	}
 
+	dma_dev->src_addr_widths = widths;
+	dma_dev->dst_addr_widths = widths;
+	dma_dev->directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);
+	dma_dev->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
+
 	if (!pdata->slave_only)
		dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask);
	if (pdata->slave && pdata->slave_num)
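With device_slave_caps() gone, each provider advertises its capabilities once on the dma_device (src_addr_widths, dst_addr_widths, directions, residue_granularity), as shdmac does above, and the core answers dma_get_slave_caps() from those fields plus the presence or absence of the pause/terminate hooks. A consumer-side sketch using the real dma_get_slave_caps() API (the helper name is invented; chan is assumed already requested):

#include <linux/dmaengine.h>

static bool chan_supports_4byte_dev_to_mem(struct dma_chan *chan)
{
	struct dma_slave_caps caps;

	if (dma_get_slave_caps(chan, &caps))
		return false;

	return (caps.directions & BIT(DMA_DEV_TO_MEM)) &&
	       (caps.src_addr_widths & BIT(DMA_SLAVE_BUSWIDTH_4_BYTES));
}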
drivers/dma/sirf-dma.c

@@ -281,9 +281,10 @@ static dma_cookie_t sirfsoc_dma_tx_submit(struct dma_async_tx_descriptor *txd)
 	return cookie;
 }
 
-static int sirfsoc_dma_slave_config(struct sirfsoc_dma_chan *schan,
-	struct dma_slave_config *config)
+static int sirfsoc_dma_slave_config(struct dma_chan *chan,
+	struct dma_slave_config *config)
 {
+	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
 	unsigned long flags;
 
 	if ((config->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||

@@ -297,8 +298,9 @@ static int sirfsoc_dma_slave_config(struct sirfsoc_dma_chan *schan,
 	return 0;
 }
 
-static int sirfsoc_dma_terminate_all(struct sirfsoc_dma_chan *schan)
+static int sirfsoc_dma_terminate_all(struct dma_chan *chan)
 {
+	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
 	struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(&schan->chan);
 	int cid = schan->chan.chan_id;
 	unsigned long flags;

@@ -327,8 +329,9 @@ static int sirfsoc_dma_terminate_all(struct sirfsoc_dma_chan *schan)
 	return 0;
 }
 
-static int sirfsoc_dma_pause_chan(struct sirfsoc_dma_chan *schan)
+static int sirfsoc_dma_pause_chan(struct dma_chan *chan)
 {
+	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
 	struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(&schan->chan);
 	int cid = schan->chan.chan_id;
 	unsigned long flags;

@@ -348,8 +351,9 @@ static int sirfsoc_dma_pause_chan(struct sirfsoc_dma_chan *schan)
 	return 0;
 }
 
-static int sirfsoc_dma_resume_chan(struct sirfsoc_dma_chan *schan)
+static int sirfsoc_dma_resume_chan(struct dma_chan *chan)
 {
+	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
 	struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(&schan->chan);
 	int cid = schan->chan.chan_id;
 	unsigned long flags;

@@ -369,30 +373,6 @@ static int sirfsoc_dma_resume_chan(struct sirfsoc_dma_chan *schan)
 	return 0;
 }
 
-static int sirfsoc_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-	unsigned long arg)
-{
-	struct dma_slave_config *config;
-	struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
-
-	switch (cmd) {
-	case DMA_PAUSE:
-		return sirfsoc_dma_pause_chan(schan);
-	case DMA_RESUME:
-		return sirfsoc_dma_resume_chan(schan);
-	case DMA_TERMINATE_ALL:
-		return sirfsoc_dma_terminate_all(schan);
-	case DMA_SLAVE_CONFIG:
-		config = (struct dma_slave_config *)arg;
-		return sirfsoc_dma_slave_config(schan, config);
-
-	default:
-		break;
-	}
-
-	return -ENOSYS;
-}
-
 /* Alloc channel resources */
 static int sirfsoc_dma_alloc_chan_resources(struct dma_chan *chan)
 {

@@ -648,18 +628,6 @@ EXPORT_SYMBOL(sirfsoc_dma_filter_id);
	BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \
	BIT(DMA_SLAVE_BUSWIDTH_8_BYTES))
 
-static int sirfsoc_dma_device_slave_caps(struct dma_chan *dchan,
-	struct dma_slave_caps *caps)
-{
-	caps->src_addr_widths = SIRFSOC_DMA_BUSWIDTHS;
-	caps->dstn_addr_widths = SIRFSOC_DMA_BUSWIDTHS;
-	caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
-	caps->cmd_pause = true;
-	caps->cmd_terminate = true;
-
-	return 0;
-}
-
 static struct dma_chan *of_dma_sirfsoc_xlate(struct of_phandle_args *dma_spec,
	struct of_dma *ofdma)
 {

@@ -739,11 +707,16 @@ static int sirfsoc_dma_probe(struct platform_device *op)
 	dma->device_alloc_chan_resources = sirfsoc_dma_alloc_chan_resources;
 	dma->device_free_chan_resources = sirfsoc_dma_free_chan_resources;
 	dma->device_issue_pending = sirfsoc_dma_issue_pending;
-	dma->device_control = sirfsoc_dma_control;
+	dma->device_config = sirfsoc_dma_slave_config;
+	dma->device_pause = sirfsoc_dma_pause_chan;
+	dma->device_resume = sirfsoc_dma_resume_chan;
+	dma->device_terminate_all = sirfsoc_dma_terminate_all;
 	dma->device_tx_status = sirfsoc_dma_tx_status;
 	dma->device_prep_interleaved_dma = sirfsoc_dma_prep_interleaved;
 	dma->device_prep_dma_cyclic = sirfsoc_dma_prep_cyclic;
-	dma->device_slave_caps = sirfsoc_dma_device_slave_caps;
+	dma->src_addr_widths = SIRFSOC_DMA_BUSWIDTHS;
+	dma->dst_addr_widths = SIRFSOC_DMA_BUSWIDTHS;
+	dma->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
 
 	INIT_LIST_HEAD(&dma->channels);
 	dma_cap_set(DMA_SLAVE, dma->cap_mask);
drivers/dma/ste_dma40.c

@@ -1429,11 +1429,17 @@ static bool d40_tx_is_linked(struct d40_chan *d40c)
 	return is_link;
 }
 
-static int d40_pause(struct d40_chan *d40c)
+static int d40_pause(struct dma_chan *chan)
 {
+	struct d40_chan *d40c = container_of(chan, struct d40_chan, chan);
 	int res = 0;
 	unsigned long flags;
 
+	if (d40c->phy_chan == NULL) {
+		chan_err(d40c, "Channel is not allocated!\n");
+		return -EINVAL;
+	}
+
 	if (!d40c->busy)
		return 0;

@@ -1448,11 +1454,17 @@ static int d40_pause(struct d40_chan *d40c)
 	return res;
 }
 
-static int d40_resume(struct d40_chan *d40c)
+static int d40_resume(struct dma_chan *chan)
 {
+	struct d40_chan *d40c = container_of(chan, struct d40_chan, chan);
 	int res = 0;
 	unsigned long flags;
 
+	if (d40c->phy_chan == NULL) {
+		chan_err(d40c, "Channel is not allocated!\n");
+		return -EINVAL;
+	}
+
 	if (!d40c->busy)
		return 0;

@@ -2604,12 +2616,17 @@ static void d40_issue_pending(struct dma_chan *chan)
 	spin_unlock_irqrestore(&d40c->lock, flags);
 }
 
-static void d40_terminate_all(struct dma_chan *chan)
+static int d40_terminate_all(struct dma_chan *chan)
 {
 	unsigned long flags;
 	struct d40_chan *d40c = container_of(chan, struct d40_chan, chan);
 	int ret;
 
+	if (d40c->phy_chan == NULL) {
+		chan_err(d40c, "Channel is not allocated!\n");
+		return -EINVAL;
+	}
+
 	spin_lock_irqsave(&d40c->lock, flags);
 
 	pm_runtime_get_sync(d40c->base->dev);

@@ -2627,6 +2644,7 @@ static void d40_terminate_all(struct dma_chan *chan)
 	d40c->busy = false;
 
 	spin_unlock_irqrestore(&d40c->lock, flags);
+	return 0;
 }
 
 static int

@@ -2673,6 +2691,11 @@ static int d40_set_runtime_config(struct dma_chan *chan,
 	u32 src_maxburst, dst_maxburst;
 	int ret;
 
+	if (d40c->phy_chan == NULL) {
+		chan_err(d40c, "Channel is not allocated!\n");
+		return -EINVAL;
+	}
+
 	src_addr_width = config->src_addr_width;
 	src_maxburst = config->src_maxburst;
 	dst_addr_width = config->dst_addr_width;

@@ -2781,35 +2804,6 @@ static int d40_set_runtime_config(struct dma_chan *chan,
 	return 0;
 }
 
-static int d40_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-		       unsigned long arg)
-{
-	struct d40_chan *d40c = container_of(chan, struct d40_chan, chan);
-
-	if (d40c->phy_chan == NULL) {
-		chan_err(d40c, "Channel is not allocated!\n");
-		return -EINVAL;
-	}
-
-	switch (cmd) {
-	case DMA_TERMINATE_ALL:
-		d40_terminate_all(chan);
-		return 0;
-	case DMA_PAUSE:
-		return d40_pause(d40c);
-	case DMA_RESUME:
-		return d40_resume(d40c);
-	case DMA_SLAVE_CONFIG:
-		return d40_set_runtime_config(chan,
-			(struct dma_slave_config *)arg);
-	default:
-		break;
-	}
-
-	/* Other commands are unimplemented */
-	return -ENXIO;
-}
-
 /* Initialization functions */
 
 static void __init d40_chan_init(struct d40_base *base, struct dma_device *dma,

@@ -2870,7 +2864,10 @@ static void d40_ops_init(struct d40_base *base, struct dma_device *dev)
 	dev->device_free_chan_resources = d40_free_chan_resources;
 	dev->device_issue_pending = d40_issue_pending;
 	dev->device_tx_status = d40_tx_status;
-	dev->device_control = d40_control;
+	dev->device_config = d40_set_runtime_config;
+	dev->device_pause = d40_pause;
+	dev->device_resume = d40_resume;
+	dev->device_terminate_all = d40_terminate_all;
 	dev->dev = base->dev;
 }
drivers/dma/sun6i-dma.c

@@ -355,38 +355,6 @@ static void sun6i_dma_free_desc(struct virt_dma_desc *vd)
 	kfree(txd);
 }
 
-static int sun6i_dma_terminate_all(struct sun6i_vchan *vchan)
-{
-	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(vchan->vc.chan.device);
-	struct sun6i_pchan *pchan = vchan->phy;
-	unsigned long flags;
-	LIST_HEAD(head);
-
-	spin_lock(&sdev->lock);
-	list_del_init(&vchan->node);
-	spin_unlock(&sdev->lock);
-
-	spin_lock_irqsave(&vchan->vc.lock, flags);
-
-	vchan_get_all_descriptors(&vchan->vc, &head);
-
-	if (pchan) {
-		writel(DMA_CHAN_ENABLE_STOP, pchan->base + DMA_CHAN_ENABLE);
-		writel(DMA_CHAN_PAUSE_RESUME, pchan->base + DMA_CHAN_PAUSE);
-
-		vchan->phy = NULL;
-		pchan->vchan = NULL;
-		pchan->desc = NULL;
-		pchan->done = NULL;
-	}
-
-	spin_unlock_irqrestore(&vchan->vc.lock, flags);
-
-	vchan_dma_desc_free_list(&vchan->vc, &head);
-
-	return 0;
-}
-
 static int sun6i_dma_start_desc(struct sun6i_vchan *vchan)
 {
 	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(vchan->vc.chan.device);

@@ -675,57 +643,92 @@ static struct dma_async_tx_descriptor *sun6i_dma_prep_slave_sg(
 	return NULL;
 }
 
-static int sun6i_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-			     unsigned long arg)
-{
-	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
-	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
-	struct sun6i_pchan *pchan = vchan->phy;
-	unsigned long flags;
-	int ret = 0;
-
-	switch (cmd) {
-	case DMA_RESUME:
-		dev_dbg(chan2dev(chan), "vchan %p: resume\n", &vchan->vc);
-
-		spin_lock_irqsave(&vchan->vc.lock, flags);
-
-		if (pchan) {
-			writel(DMA_CHAN_PAUSE_RESUME,
-			       pchan->base + DMA_CHAN_PAUSE);
-		} else if (!list_empty(&vchan->vc.desc_issued)) {
-			spin_lock(&sdev->lock);
-			list_add_tail(&vchan->node, &sdev->pending);
-			spin_unlock(&sdev->lock);
-		}
-
-		spin_unlock_irqrestore(&vchan->vc.lock, flags);
-		break;
-
-	case DMA_PAUSE:
-		dev_dbg(chan2dev(chan), "vchan %p: pause\n", &vchan->vc);
-
-		if (pchan) {
-			writel(DMA_CHAN_PAUSE_PAUSE,
-			       pchan->base + DMA_CHAN_PAUSE);
-		} else {
-			spin_lock(&sdev->lock);
-			list_del_init(&vchan->node);
-			spin_unlock(&sdev->lock);
-		}
-		break;
-
-	case DMA_TERMINATE_ALL:
-		ret = sun6i_dma_terminate_all(vchan);
-		break;
-	case DMA_SLAVE_CONFIG:
-		memcpy(&vchan->cfg, (void *)arg, sizeof(struct dma_slave_config));
-		break;
-	default:
-		ret = -ENXIO;
-		break;
-	}
-	return ret;
-}
+static int sun6i_dma_config(struct dma_chan *chan,
+			    struct dma_slave_config *config)
+{
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+
+	memcpy(&vchan->cfg, config, sizeof(*config));
+
+	return 0;
+}
+
+static int sun6i_dma_pause(struct dma_chan *chan)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	struct sun6i_pchan *pchan = vchan->phy;
+
+	dev_dbg(chan2dev(chan), "vchan %p: pause\n", &vchan->vc);
+
+	if (pchan) {
+		writel(DMA_CHAN_PAUSE_PAUSE,
+		       pchan->base + DMA_CHAN_PAUSE);
+	} else {
+		spin_lock(&sdev->lock);
+		list_del_init(&vchan->node);
+		spin_unlock(&sdev->lock);
+	}
+
+	return 0;
+}
+
+static int sun6i_dma_resume(struct dma_chan *chan)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	struct sun6i_pchan *pchan = vchan->phy;
+	unsigned long flags;
+
+	dev_dbg(chan2dev(chan), "vchan %p: resume\n", &vchan->vc);
+
+	spin_lock_irqsave(&vchan->vc.lock, flags);
+
+	if (pchan) {
+		writel(DMA_CHAN_PAUSE_RESUME,
+		       pchan->base + DMA_CHAN_PAUSE);
+	} else if (!list_empty(&vchan->vc.desc_issued)) {
+		spin_lock(&sdev->lock);
+		list_add_tail(&vchan->node, &sdev->pending);
+		spin_unlock(&sdev->lock);
+	}
+
+	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+
+	return 0;
+}
+
+static int sun6i_dma_terminate_all(struct dma_chan *chan)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	struct sun6i_pchan *pchan = vchan->phy;
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock(&sdev->lock);
+	list_del_init(&vchan->node);
+	spin_unlock(&sdev->lock);
+
+	spin_lock_irqsave(&vchan->vc.lock, flags);
+
+	vchan_get_all_descriptors(&vchan->vc, &head);
+
+	if (pchan) {
+		writel(DMA_CHAN_ENABLE_STOP, pchan->base + DMA_CHAN_ENABLE);
+		writel(DMA_CHAN_PAUSE_RESUME, pchan->base + DMA_CHAN_PAUSE);
+
+		vchan->phy = NULL;
+		pchan->vchan = NULL;
+		pchan->desc = NULL;
+		pchan->done = NULL;
+	}
+
+	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+
+	vchan_dma_desc_free_list(&vchan->vc, &head);
+
+	return 0;
+}
 
 static enum dma_status sun6i_dma_tx_status(struct dma_chan *chan,

@@ -960,9 +963,20 @@ static int sun6i_dma_probe(struct platform_device *pdev)
 	sdc->slave.device_issue_pending = sun6i_dma_issue_pending;
 	sdc->slave.device_prep_slave_sg = sun6i_dma_prep_slave_sg;
 	sdc->slave.device_prep_dma_memcpy = sun6i_dma_prep_dma_memcpy;
-	sdc->slave.device_control = sun6i_dma_control;
 	sdc->slave.copy_align = 4;
+	sdc->slave.device_config = sun6i_dma_config;
+	sdc->slave.device_pause = sun6i_dma_pause;
+	sdc->slave.device_resume = sun6i_dma_resume;
+	sdc->slave.device_terminate_all = sun6i_dma_terminate_all;
+	sdc->slave.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+				     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+				     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+	sdc->slave.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+				     BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+				     BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+	sdc->slave.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	sdc->slave.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
 	sdc->slave.dev = &pdev->dev;
 
 	sdc->pchans = devm_kcalloc(&pdev->dev, sdc->cfg->nr_max_channels,
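sun6i's terminate_all is the canonical virt-dma teardown order: collect every outstanding descriptor onto a private list while holding the lock, detach the physical channel, then free the list outside the lock so descriptor free callbacks cannot deadlock against it. A skeleton of that ordering (quux_* names invented; vchan_get_all_descriptors() and vchan_dma_desc_free_list() are the real drivers/dma/virt-dma.h helpers):

#include <linux/dmaengine.h>
#include <linux/spinlock.h>
#include "virt-dma.h"	/* drivers/dma/virt-dma.h */

struct quux_chan {
	struct virt_dma_chan vc;
};

static int quux_terminate_all(struct dma_chan *chan)
{
	struct quux_chan *qc = container_of(chan, struct quux_chan, vc.chan);
	unsigned long flags;
	LIST_HEAD(head);

	spin_lock_irqsave(&qc->vc.lock, flags);
	/* 1. steal all submitted/issued descriptors */
	vchan_get_all_descriptors(&qc->vc, &head);
	/* 2. stop and release the physical channel here (hardware specific) */
	spin_unlock_irqrestore(&qc->vc.lock, flags);

	/* 3. free the stolen descriptors outside the lock */
	vchan_dma_desc_free_list(&qc->vc, &head);

	return 0;
}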
drivers/dma/tegra20-apb-dma.c

@@ -723,7 +723,7 @@ static void tegra_dma_issue_pending(struct dma_chan *dc)
 	return;
 }
 
-static void tegra_dma_terminate_all(struct dma_chan *dc)
+static int tegra_dma_terminate_all(struct dma_chan *dc)
 {
 	struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc);
 	struct tegra_dma_sg_req *sgreq;

@@ -736,7 +736,7 @@ static void tegra_dma_terminate_all(struct dma_chan *dc)
 	spin_lock_irqsave(&tdc->lock, flags);
 	if (list_empty(&tdc->pending_sg_req)) {
		spin_unlock_irqrestore(&tdc->lock, flags);
-		return;
+		return 0;
 	}
 
 	if (!tdc->busy)

@@ -777,6 +777,7 @@ static void tegra_dma_terminate_all(struct dma_chan *dc)
		dma_desc->cb_count = 0;
	}
	spin_unlock_irqrestore(&tdc->lock, flags);
+	return 0;
 }
 
 static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,

@@ -827,25 +828,6 @@ static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
 	return ret;
 }
 
-static int tegra_dma_device_control(struct dma_chan *dc, enum dma_ctrl_cmd cmd,
-			unsigned long arg)
-{
-	switch (cmd) {
-	case DMA_SLAVE_CONFIG:
-		return tegra_dma_slave_config(dc,
-				(struct dma_slave_config *)arg);
-
-	case DMA_TERMINATE_ALL:
-		tegra_dma_terminate_all(dc);
-		return 0;
-
-	default:
-		break;
-	}
-
-	return -ENXIO;
-}
-
 static inline int get_bus_width(struct tegra_dma_channel *tdc,
		enum dma_slave_buswidth slave_bw)
 {

@@ -1443,7 +1425,23 @@ static int tegra_dma_probe(struct platform_device *pdev)
		tegra_dma_free_chan_resources;
 	tdma->dma_dev.device_prep_slave_sg = tegra_dma_prep_slave_sg;
 	tdma->dma_dev.device_prep_dma_cyclic = tegra_dma_prep_dma_cyclic;
-	tdma->dma_dev.device_control = tegra_dma_device_control;
+	tdma->dma_dev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+		BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+		BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) |
+		BIT(DMA_SLAVE_BUSWIDTH_8_BYTES);
+	tdma->dma_dev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+		BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+		BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) |
+		BIT(DMA_SLAVE_BUSWIDTH_8_BYTES);
+	tdma->dma_dev.directions = BIT(DMA_DEV_TO_MEM) |
+		BIT(DMA_MEM_TO_DEV);
+	/*
+	 * XXX The hardware appears to support
+	 * DMA_RESIDUE_GRANULARITY_BURST-level reporting, but it's
+	 * only used by this driver during tegra_dma_terminate_all()
+	 */
+	tdma->dma_dev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+	tdma->dma_dev.device_config = tegra_dma_slave_config;
+	tdma->dma_dev.device_terminate_all = tegra_dma_terminate_all;
 	tdma->dma_dev.device_tx_status = tegra_dma_tx_status;
 	tdma->dma_dev.device_issue_pending = tegra_dma_issue_pending;
drivers/dma/timb_dma.c

@@ -561,8 +561,7 @@ static struct dma_async_tx_descriptor *td_prep_slave_sg(struct dma_chan *chan,
 	return &td_desc->txd;
 }
 
-static int td_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-		      unsigned long arg)
+static int td_terminate_all(struct dma_chan *chan)
 {
 	struct timb_dma_chan *td_chan =
		container_of(chan, struct timb_dma_chan, chan);

@@ -570,9 +569,6 @@ static int td_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
 
 	dev_dbg(chan2dev(chan), "%s: Entry\n", __func__);
 
-	if (cmd != DMA_TERMINATE_ALL)
-		return -ENXIO;
-
 	/* first the easy part, put the queue into the free list */
 	spin_lock_bh(&td_chan->lock);
 	list_for_each_entry_safe(td_desc, _td_desc, &td_chan->queue,

@@ -697,7 +693,7 @@ static int td_probe(struct platform_device *pdev)
 	dma_cap_set(DMA_SLAVE, td->dma.cap_mask);
 	dma_cap_set(DMA_PRIVATE, td->dma.cap_mask);
 	td->dma.device_prep_slave_sg = td_prep_slave_sg;
-	td->dma.device_control = td_control;
+	td->dma.device_terminate_all = td_terminate_all;
 
 	td->dma.dev = &pdev->dev;
drivers/dma/txx9dmac.c

@@ -901,17 +901,12 @@ txx9dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 	return &first->txd;
 }
 
-static int txx9dmac_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-			    unsigned long arg)
+static int txx9dmac_terminate_all(struct dma_chan *chan)
 {
 	struct txx9dmac_chan *dc = to_txx9dmac_chan(chan);
 	struct txx9dmac_desc *desc, *_desc;
 	LIST_HEAD(list);
 
-	/* Only supports DMA_TERMINATE_ALL */
-	if (cmd != DMA_TERMINATE_ALL)
-		return -EINVAL;
-
 	dev_vdbg(chan2dev(chan), "terminate_all\n");
 	spin_lock_bh(&dc->lock);

@@ -1109,7 +1104,7 @@ static int __init txx9dmac_chan_probe(struct platform_device *pdev)
 	dc->dma.dev = &pdev->dev;
 	dc->dma.device_alloc_chan_resources = txx9dmac_alloc_chan_resources;
 	dc->dma.device_free_chan_resources = txx9dmac_free_chan_resources;
-	dc->dma.device_control = txx9dmac_control;
+	dc->dma.device_terminate_all = txx9dmac_terminate_all;
 	dc->dma.device_tx_status = txx9dmac_tx_status;
 	dc->dma.device_issue_pending = txx9dmac_issue_pending;
 	if (pdata && pdata->memcpy_chan == ch) {
drivers/dma/xilinx/xilinx_vdma.c

@@ -1001,13 +1001,17 @@ xilinx_vdma_dma_prep_interleaved(struct dma_chan *dchan,
 * xilinx_vdma_terminate_all - Halt the channel and free descriptors
 * @chan: Driver specific VDMA Channel pointer
 */
-static void xilinx_vdma_terminate_all(struct xilinx_vdma_chan *chan)
+static int xilinx_vdma_terminate_all(struct dma_chan *dchan)
 {
+	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
+
 	/* Halt the DMA engine */
 	xilinx_vdma_halt(chan);
 
 	/* Remove and free all of the descriptors in the lists */
 	xilinx_vdma_free_descriptors(chan);
+
+	return 0;
 }
 
 /**

@@ -1075,27 +1079,6 @@ int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
 }
 EXPORT_SYMBOL(xilinx_vdma_channel_set_config);
 
-/**
- * xilinx_vdma_device_control - Configure DMA channel of the device
- * @dchan: DMA Channel pointer
- * @cmd: DMA control command
- * @arg: Channel configuration
- *
- * Return: '0' on success and failure value on error
- */
-static int xilinx_vdma_device_control(struct dma_chan *dchan,
-				      enum dma_ctrl_cmd cmd, unsigned long arg)
-{
-	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
-
-	if (cmd != DMA_TERMINATE_ALL)
-		return -ENXIO;
-
-	xilinx_vdma_terminate_all(chan);
-
-	return 0;
-}
-
 /* -----------------------------------------------------------------------------
 * Probe and remove
 */

@@ -1300,7 +1283,7 @@ static int xilinx_vdma_probe(struct platform_device *pdev)
		xilinx_vdma_free_chan_resources;
 	xdev->common.device_prep_interleaved_dma =
				xilinx_vdma_dma_prep_interleaved;
-	xdev->common.device_control = xilinx_vdma_device_control;
+	xdev->common.device_terminate_all = xilinx_vdma_terminate_all;
 	xdev->common.device_tx_status = xilinx_vdma_tx_status;
 	xdev->common.device_issue_pending = xilinx_vdma_issue_pending;
drivers/rapidio/devices/tsi721_dma.c

@@ -815,8 +815,7 @@ struct dma_async_tx_descriptor *tsi721_prep_rio_sg(struct dma_chan *dchan,
 	return txd;
 }
 
-static int tsi721_device_control(struct dma_chan *dchan, enum dma_ctrl_cmd cmd,
-			     unsigned long arg)
+static int tsi721_terminate_all(struct dma_chan *dchan)
 {
 	struct tsi721_bdma_chan *bdma_chan = to_tsi721_chan(dchan);
 	struct tsi721_tx_desc *desc, *_d;

@@ -825,9 +824,6 @@ static int tsi721_device_control(struct dma_chan *dchan, enum dma_ctrl_cmd cmd,
 
 	dev_dbg(dchan->device->dev, "%s: Entry\n", __func__);
 
-	if (cmd != DMA_TERMINATE_ALL)
-		return -ENOSYS;
-
 	spin_lock_bh(&bdma_chan->lock);
 
 	bdma_chan->active = false;

@@ -901,7 +897,7 @@ int tsi721_register_dma(struct tsi721_device *priv)
 	mport->dma.device_tx_status = tsi721_tx_status;
 	mport->dma.device_issue_pending = tsi721_issue_pending;
 	mport->dma.device_prep_slave_sg = tsi721_prep_rio_sg;
-	mport->dma.device_control = tsi721_device_control;
+	mport->dma.device_terminate_all = tsi721_terminate_all;
 
 	err = dma_async_device_register(&mport->dma);
 	if (err)
include/linux/dmaengine.h

(This diff is collapsed in the source view.)
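The dmaengine.h half of the series is collapsed in this view; its net effect is that the dmaengine_*() inline wrappers stop packing commands into device_control() and instead call the dedicated hooks directly, reporting -ENOSYS when a hook is absent. Roughly (a paraphrase of the post-rework header, not its verbatim text):

/* Approximate post-rework shape of the inline wrappers. */
static inline int dmaengine_slave_config(struct dma_chan *chan,
					 struct dma_slave_config *config)
{
	if (chan->device->device_config)
		return chan->device->device_config(chan, config);

	return -ENOSYS;
}

static inline int dmaengine_terminate_all(struct dma_chan *chan)
{
	if (chan->device->device_terminate_all)
		return chan->device->device_terminate_all(chan);

	return -ENOSYS;
}

dmaengine_pause() and dmaengine_resume() follow the same "call the hook if present, else -ENOSYS" pattern, which is also how the core now derives cmd_pause and cmd_terminate for dma_slave_caps.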
sound/soc/soc-generic-dmaengine-pcm.c

@@ -151,7 +151,7 @@ static int dmaengine_pcm_set_runtime_hwparams(struct snd_pcm_substream *substrea
			hw.info |= SNDRV_PCM_INFO_BATCH;
 
		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
-			addr_widths = dma_caps.dstn_addr_widths;
+			addr_widths = dma_caps.dst_addr_widths;
		else
			addr_widths = dma_caps.src_addr_widths;
	}
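This last one-line change simply tracks the struct dma_slave_caps field rename (dstn_addr_widths becomes dst_addr_widths). The logic at that spot boils down to the following simplified restatement (the helper name is invented; the caps fields and SNDRV_PCM_STREAM_PLAYBACK are real):

#include <linux/dmaengine.h>
#include <sound/pcm.h>

/* Pick the address-width mask that matters for the stream direction. */
static u32 stream_addr_widths(const struct dma_slave_caps *caps, int stream)
{
	if (stream == SNDRV_PCM_STREAM_PLAYBACK)
		return caps->dst_addr_widths;	/* was dstn_addr_widths */

	return caps->src_addr_widths;
}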