OpenHarmony / kernel_linux
Commit b7f797cb
Author: Ingo Molnar
Date:   June 22, 2009

    Merge branch 'for-tip' of
    git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu
    into x86/urgent

Parents: 99bd0c0f, 0017c869

Showing 54 changed files with 2528 additions and 282 deletions (+2528, -282)
Documentation/kernel-parameters.txt                      +6    -0
Documentation/powerpc/dts-bindings/fsl/esdhc.txt         +2    -0
Documentation/sound/alsa/HD-Audio-Models.txt             +1    -0
MAINTAINERS                                              +22   -0
arch/alpha/mm/fault.c                                    +1    -1
arch/arm/mm/fault.c                                      +1    -1
arch/avr32/mm/fault.c                                    +1    -1
arch/cris/mm/fault.c                                     +1    -1
arch/frv/mm/fault.c                                      +1    -1
arch/ia64/mm/fault.c                                     +1    -1
arch/m32r/mm/fault.c                                     +1    -1
arch/m68k/mm/fault.c                                     +1    -1
arch/microblaze/mm/fault.c                               +1    -1
arch/mips/mm/fault.c                                     +1    -1
arch/mn10300/mm/fault.c                                  +1    -1
arch/parisc/mm/fault.c                                   +1    -1
arch/powerpc/mm/fault.c                                  +1    -1
arch/powerpc/platforms/cell/spu_fault.c                  +1    -1
arch/s390/lib/uaccess_pt.c                               +1    -1
arch/s390/mm/fault.c                                     +1    -1
arch/sh/mm/fault_32.c                                    +1    -1
arch/sh/mm/tlbflush_64.c                                 +1    -1
arch/sparc/mm/fault_32.c                                 +2    -2
arch/sparc/mm/fault_64.c                                 +1    -1
arch/um/kernel/trap.c                                    +1    -1
arch/x86/crypto/aesni-intel_asm.S                        +3    -2
arch/x86/crypto/aesni-intel_glue.c                       +4    -0
arch/x86/crypto/fpu.c                                    +2    -2
arch/x86/include/asm/percpu.h                            +10   -0
arch/x86/kernel/setup_percpu.c                           +162  -57
arch/x86/mm/fault.c                                      +1    -1
arch/x86/mm/pageattr.c                                   +40   -25
arch/xtensa/mm/fault.c                                   +1    -1
drivers/crypto/padlock-aes.c                             +98   -40
drivers/mmc/host/Kconfig                                 +36   -0
drivers/mmc/host/Makefile                                +2    -0
drivers/mmc/host/s3cmci.c                                +1    -1
drivers/mmc/host/sdhci-of.c                              +3    -0
drivers/mmc/host/sdhci-pci.c                             +20   -0
drivers/mmc/host/sdhci-s3c.c                             +428  -0
drivers/mmc/host/sdhci.c                                 +47   -5
drivers/mmc/host/sdhci.h                                 +6    -0
drivers/mmc/host/via-sdmmc.c                             +1362 -0
include/linux/mm.h                                       +2    -2
ipc/util.h                                               +1    -0
lib/Kconfig.debug                                        +1    -1
lib/dma-debug.c                                          +98   -51
mm/memory.c                                              +24   -24
mm/percpu.c                                              +14   -10
sound/pci/hda/hda_codec.c                                +0    -2
sound/pci/hda/patch_realtek.c                            +103  -31
sound/soc/txx9/txx9aclc.c                                +2    -2
sound/usb/caiaq/audio.c                                  +3    -2
sound/usb/caiaq/device.c                                 +1    -1
Documentation/kernel-parameters.txt
@@ -1882,6 +1882,12 @@ and is between 256 and 4096 characters. It is defined in the file
 			Format: { 0 | 1 }
 			See arch/parisc/kernel/pdc_chassis.c
 
+	percpu_alloc=	[X86] Select which percpu first chunk allocator to use.
+			Allowed values are one of "lpage", "embed" and "4k".
+			See comments in arch/x86/kernel/setup_percpu.c for
+			details on each allocator.  This parameter is primarily
+			for debugging and performance comparison.
+
 	pf.		[PARIDE]
 			See Documentation/blockdev/paride.txt.
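Note: the "lpage"/"embed"/"4k" strings documented above are consumed by the early_param("percpu_alloc") handler that this series adds to arch/x86/kernel/setup_percpu.c, with a fall-back chain that always ends at the 4k allocator. A minimal, stand-alone C sketch of that selection-and-fallback order (the setup_* stubs below are made-up placeholders, not the kernel functions):

#include <stdio.h>
#include <string.h>

/* Toy stand-ins for the real first-chunk allocators; each returns <0 on failure. */
static int setup_lpage(void) { return -1; }	/* pretend lpage is unavailable (no PSE / non-NUMA) */
static int setup_embed(void) { return 0; }
static int setup_4k(void)    { return 0; }

/* Mirrors the fallback order: the chosen allocator is tried first, 4k is the last resort. */
static const char *pick_allocator(const char *percpu_alloc)
{
	int ret = -1;

	if (percpu_alloc && strcmp(percpu_alloc, "4k") != 0) {
		if (!strcmp(percpu_alloc, "lpage"))
			ret = setup_lpage();
		else if (!strcmp(percpu_alloc, "embed"))
			ret = setup_embed();
		if (ret == 0)
			return percpu_alloc;
	} else if (!percpu_alloc) {
		/* no explicit choice: lpage first, then embed */
		if (setup_lpage() == 0)
			return "lpage";
		if (setup_embed() == 0)
			return "embed";
	}
	return setup_4k() == 0 ? "4k" : "none";
}

int main(void)
{
	printf("percpu_alloc=lpage -> %s\n", pick_allocator("lpage"));
	printf("(no parameter)     -> %s\n", pick_allocator(NULL));
	return 0;
}

Running the sketch shows a forced "lpage" request falling back to "4k" when the large-page path is unavailable, while the no-parameter case settles on "embed".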
Documentation/powerpc/dts-bindings/fsl/esdhc.txt
@@ -10,6 +10,8 @@ Required properties:
   - interrupts : should contain eSDHC interrupt.
   - interrupt-parent : interrupt source phandle.
   - clock-frequency : specifies eSDHC base clock frequency.
+  - sdhci,1-bit-only : (optional) specifies that a controller can
+    only handle 1-bit data transfers.
 
 Example:
Documentation/sound/alsa/HD-Audio-Models.txt
@@ -139,6 +139,7 @@ ALC883/888
   acer		Acer laptops (Travelmate 3012WTMi, Aspire 5600, etc)
   acer-aspire	Acer Aspire 9810
   acer-aspire-4930g Acer Aspire 4930G
+  acer-aspire-6530g Acer Aspire 6530G
   acer-aspire-8930g Acer Aspire 8930G
   medion	Medion Laptops
   medion-md2	Medion MD2
MAINTAINERS
@@ -1010,6 +1010,13 @@ W:	http://www.at91.com/
 S:	Maintained
 F:	drivers/mmc/host/at91_mci.c
 
+ATMEL AT91 / AT32 MCI DRIVER
+P:	Nicolas Ferre
+M:	nicolas.ferre@atmel.com
+S:	Maintained
+F:	drivers/mmc/host/atmel-mci.c
+F:	drivers/mmc/host/atmel-mci-regs.h
+
 ATMEL AT91 / AT32 SERIAL DRIVER
 P:	Haavard Skinnemoen
 M:	hskinnemoen@atmel.com
@@ -5094,6 +5101,13 @@ L:	sdhci-devel@lists.ossman.eu
 S:	Maintained
 F:	drivers/mmc/host/sdhci.*
 
+SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) SAMSUNG DRIVER
+P:	Ben Dooks
+M:	ben-linux@fluff.org
+L:	sdhci-devel@lists.ossman.eu
+S:	Maintained
+F:	drivers/mmc/host/sdhci-s3c.c
+
 SECURITY SUBSYSTEM
 P:	James Morris
 M:	jmorris@namei.org
@@ -6216,6 +6230,14 @@ S:	Maintained
 F:	Documentation/i2c/busses/i2c-viapro
 F:	drivers/i2c/busses/i2c-viapro.c
 
+VIA SD/MMC CARD CONTROLLER DRIVER
+P:	Joseph Chan
+M:	JosephChan@via.com.tw
+P:	Harald Welte
+M:	HaraldWelte@viatech.com
+S:	Maintained
+F:	drivers/mmc/host/via-sdmmc.c
+
 VIA UNICHROME(PRO)/CHROME9 FRAMEBUFFER DRIVER
 P:	Joseph Chan
 M:	JosephChan@via.com.tw
arch/alpha/mm/fault.c
@@ -146,7 +146,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	/* If for any reason at all we couldn't handle the fault,
 	   make sure we exit gracefully rather than endlessly redo
 	   the fault.  */
-	fault = handle_mm_fault(mm, vma, address, cause > 0);
+	fault = handle_mm_fault(mm, vma, address, cause > 0 ? FAULT_FLAG_WRITE : 0);
 	up_read(&mm->mmap_sem);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
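The same mechanical conversion repeats for every architecture below: the final write-access boolean of handle_mm_fault() becomes a flags word carrying FAULT_FLAG_WRITE. A tiny stand-alone sketch of the before/after argument construction (the 0x01 flag value is assumed here for illustration; the real definition lives in include/linux/mm.h):

#include <stdio.h>

#define FAULT_FLAG_WRITE 0x01	/* assumed value for this sketch */

/* Old style: callers passed a bare 0/1 write indicator. */
static unsigned int old_arg(int write_access)
{
	return write_access;
}

/* New style: callers build an explicit flags word. */
static unsigned int new_arg(int write_access)
{
	return write_access ? FAULT_FLAG_WRITE : 0;
}

int main(void)
{
	printf("write fault: old=%u new=0x%x\n", old_arg(1), new_arg(1));
	printf("read fault:  old=%u new=0x%x\n", old_arg(0), new_arg(0));
	return 0;
}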
arch/arm/mm/fault.c
@@ -208,7 +208,7 @@ __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
 	 * than endlessly redo the fault.
 	 */
 survive:
-	fault = handle_mm_fault(mm, vma, addr & PAGE_MASK, fsr & (1 << 11));
+	fault = handle_mm_fault(mm, vma, addr & PAGE_MASK, (fsr & (1 << 11)) ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
arch/avr32/mm/fault.c
@@ -133,7 +133,7 @@ asmlinkage void do_page_fault(unsigned long ecr, struct pt_regs *regs)
 	 * fault.
 	 */
 survive:
-	fault = handle_mm_fault(mm, vma, address, writeaccess);
+	fault = handle_mm_fault(mm, vma, address, writeaccess ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
arch/cris/mm/fault.c
@@ -163,7 +163,7 @@ do_page_fault(unsigned long address, struct pt_regs *regs,
 	 * the fault.
 	 */
-	fault = handle_mm_fault(mm, vma, address, writeaccess & 1);
+	fault = handle_mm_fault(mm, vma, address, (writeaccess & 1) ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
arch/frv/mm/fault.c
@@ -163,7 +163,7 @@ asmlinkage void do_page_fault(int datammu, unsigned long esr0, unsigned long ear
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(mm, vma, ear0, write);
+	fault = handle_mm_fault(mm, vma, ear0, write ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
arch/ia64/mm/fault.c
@@ -154,7 +154,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
 	 * sure we exit gracefully rather than endlessly redo the
 	 * fault.
 	 */
-	fault = handle_mm_fault(mm, vma, address, (mask & VM_WRITE) != 0);
+	fault = handle_mm_fault(mm, vma, address, (mask & VM_WRITE) ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		/*
 		 * We ran out of memory, or some other thing happened
arch/m32r/mm/fault.c
@@ -196,7 +196,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long error_code,
 	 */
 	addr = (address & PAGE_MASK);
 	set_thread_fault_code(error_code);
-	fault = handle_mm_fault(mm, vma, addr, write);
+	fault = handle_mm_fault(mm, vma, addr, write ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
arch/m68k/mm/fault.c
@@ -155,7 +155,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 	 */
 survive:
-	fault = handle_mm_fault(mm, vma, address, write);
+	fault = handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0);
 #ifdef DEBUG
 	printk("handle_mm_fault returns %d\n", fault);
 #endif
arch/microblaze/mm/fault.c
@@ -232,7 +232,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * the fault.
 	 */
 survive:
-	fault = handle_mm_fault(mm, vma, address, is_write);
+	fault = handle_mm_fault(mm, vma, address, is_write ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
arch/mips/mm/fault.c
@@ -102,7 +102,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(mm, vma, address, write);
+	fault = handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
arch/mn10300/mm/fault.c
@@ -258,7 +258,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long fault_code,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(mm, vma, address, write);
+	fault = handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
arch/parisc/mm/fault.c
@@ -202,7 +202,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 	 * fault.
 	 */
-	fault = handle_mm_fault(mm, vma, address, (acc_type & VM_WRITE) != 0);
+	fault = handle_mm_fault(mm, vma, address, (acc_type & VM_WRITE) ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		/*
 		 * We hit a shared mapping outside of the file, or some
arch/powerpc/mm/fault.c
@@ -302,7 +302,7 @@ int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * the fault.
 	 */
 survive:
-	ret = handle_mm_fault(mm, vma, address, is_write);
+	ret = handle_mm_fault(mm, vma, address, is_write ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(ret & VM_FAULT_ERROR)) {
 		if (ret & VM_FAULT_OOM)
 			goto out_of_memory;
arch/powerpc/platforms/cell/spu_fault.c
@@ -70,7 +70,7 @@ int spu_handle_mm_fault(struct mm_struct *mm, unsigned long ea,
 	}
 
 	ret = 0;
-	*flt = handle_mm_fault(mm, vma, ea, is_write);
+	*flt = handle_mm_fault(mm, vma, ea, is_write ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(*flt & VM_FAULT_ERROR)) {
 		if (*flt & VM_FAULT_OOM) {
 			ret = -ENOMEM;
arch/s390/lib/uaccess_pt.c
@@ -66,7 +66,7 @@ static int __handle_fault(struct mm_struct *mm, unsigned long address,
 	}
 
 survive:
-	fault = handle_mm_fault(mm, vma, address, write_access);
+	fault = handle_mm_fault(mm, vma, address, write_access ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
arch/s390/mm/fault.c
@@ -352,7 +352,7 @@ do_exception(struct pt_regs *regs, unsigned long error_code, int write)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(mm, vma, address, write);
+	fault = handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM) {
 			up_read(&mm->mmap_sem);
arch/sh/mm/fault_32.c
@@ -133,7 +133,7 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
 	 * the fault.
 	 */
 survive:
-	fault = handle_mm_fault(mm, vma, address, writeaccess);
+	fault = handle_mm_fault(mm, vma, address, writeaccess ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
arch/sh/mm/tlbflush_64.c
@@ -187,7 +187,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long writeaccess,
 	 * the fault.
 	 */
 survive:
-	fault = handle_mm_fault(mm, vma, address, writeaccess);
+	fault = handle_mm_fault(mm, vma, address, writeaccess ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
arch/sparc/mm/fault_32.c
@@ -241,7 +241,7 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(mm, vma, address, write);
+	fault = handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
@@ -484,7 +484,7 @@ static void force_user_fault(unsigned long address, int write)
 		if (!(vma->vm_flags & (VM_READ | VM_EXEC)))
 			goto bad_area;
 	}
-	switch (handle_mm_fault(mm, vma, address, write)) {
+	switch (handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0)) {
 	case VM_FAULT_SIGBUS:
 	case VM_FAULT_OOM:
 		goto do_sigbus;
arch/sparc/mm/fault_64.c
@@ -398,7 +398,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
 			goto bad_area;
 	}
 
-	fault = handle_mm_fault(mm, vma, address, (fault_code & FAULT_CODE_WRITE));
+	fault = handle_mm_fault(mm, vma, address, (fault_code & FAULT_CODE_WRITE) ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
arch/um/kernel/trap.c
@@ -65,7 +65,7 @@ int handle_page_fault(unsigned long address, unsigned long ip,
 	do {
 		int fault;
 
-		fault = handle_mm_fault(mm, vma, address, is_write);
+		fault = handle_mm_fault(mm, vma, address, is_write ? FAULT_FLAG_WRITE : 0);
 		if (unlikely(fault & VM_FAULT_ERROR)) {
 			if (fault & VM_FAULT_OOM) {
 				goto out_of_memory;
arch/x86/crypto/aesni-intel_asm.S
@@ -845,7 +845,7 @@ ENTRY(aesni_cbc_enc)
  */
 ENTRY(aesni_cbc_dec)
 	cmp $16, LEN
-	jb .Lcbc_dec_ret
+	jb .Lcbc_dec_just_ret
 	mov 480(KEYP), KLEN
 	add $240, KEYP
 	movups (IVP), IV
@@ -891,6 +891,7 @@ ENTRY(aesni_cbc_dec)
 	add $16, OUTP
 	cmp $16, LEN
 	jge .Lcbc_dec_loop1
-	movups IV, (IVP)
 .Lcbc_dec_ret:
+	movups IV, (IVP)
+.Lcbc_dec_just_ret:
 	ret
arch/x86/crypto/aesni-intel_glue.c
@@ -198,6 +198,7 @@ static int ecb_encrypt(struct blkcipher_desc *desc,
 
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
+	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
 
 	kernel_fpu_begin();
 	while ((nbytes = walk.nbytes)) {
@@ -221,6 +222,7 @@ static int ecb_decrypt(struct blkcipher_desc *desc,
 
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
+	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
 
 	kernel_fpu_begin();
 	while ((nbytes = walk.nbytes)) {
@@ -266,6 +268,7 @@ static int cbc_encrypt(struct blkcipher_desc *desc,
 
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
+	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
 
 	kernel_fpu_begin();
 	while ((nbytes = walk.nbytes)) {
@@ -289,6 +292,7 @@ static int cbc_decrypt(struct blkcipher_desc *desc,
 
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
+	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
 
 	kernel_fpu_begin();
 	while ((nbytes = walk.nbytes)) {
arch/x86/crypto/fpu.c
@@ -48,7 +48,7 @@ static int crypto_fpu_encrypt(struct blkcipher_desc *desc_in,
 	struct blkcipher_desc desc = {
 		.tfm = child,
 		.info = desc_in->info,
-		.flags = desc_in->flags,
+		.flags = desc_in->flags & ~CRYPTO_TFM_REQ_MAY_SLEEP,
 	};
 
 	kernel_fpu_begin();
@@ -67,7 +67,7 @@ static int crypto_fpu_decrypt(struct blkcipher_desc *desc_in,
 	struct blkcipher_desc desc = {
 		.tfm = child,
 		.info = desc_in->info,
-		.flags = desc_in->flags,
+		.flags = desc_in->flags & ~CRYPTO_TFM_REQ_MAY_SLEEP,
 	};
 
 	kernel_fpu_begin();
arch/x86/include/asm/percpu.h
@@ -42,6 +42,7 @@
 #else /* ...!ASSEMBLY */
 
+#include <linux/kernel.h>
 #include <linux/stringify.h>
 
 #ifdef CONFIG_SMP
@@ -155,6 +156,15 @@ do {							\
 /* We can use this directly for local CPU (faster). */
 DECLARE_PER_CPU(unsigned long, this_cpu_off);
 
+#ifdef CONFIG_NEED_MULTIPLE_NODES
+void *pcpu_lpage_remapped(void *kaddr);
+#else
+static inline void *pcpu_lpage_remapped(void *kaddr)
+{
+	return NULL;
+}
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
 #ifdef CONFIG_SMP
arch/x86/kernel/setup_percpu.c
@@ -124,7 +124,7 @@ static void * __init pcpu_alloc_bootmem(unsigned int cpu, unsigned long size,
 }
 
 /*
- * Remap allocator
+ * Large page remap allocator
  *
  * This allocator uses PMD page as unit.  A PMD page is allocated for
  * each cpu and each is remapped into vmalloc area using PMD mapping.
@@ -137,105 +137,185 @@ static void * __init pcpu_alloc_bootmem(unsigned int cpu, unsigned long size,
 * better than only using 4k mappings while still being NUMA friendly.
 */
#ifdef CONFIG_NEED_MULTIPLE_NODES
-static size_t pcpur_size __initdata;
-static void **pcpur_ptrs __initdata;
+struct pcpul_ent {
+	unsigned int	cpu;
+	void		*ptr;
+};
+
+static size_t pcpul_size;
+static struct pcpul_ent *pcpul_map;
+static struct vm_struct pcpul_vm;
 
-static struct page * __init pcpur_get_page(unsigned int cpu, int pageno)
+static struct page * __init pcpul_get_page(unsigned int cpu, int pageno)
 {
 	size_t off = (size_t)pageno << PAGE_SHIFT;
 
-	if (off >= pcpur_size)
+	if (off >= pcpul_size)
 		return NULL;
 
-	return virt_to_page(pcpur_ptrs[cpu] + off);
+	return virt_to_page(pcpul_map[cpu].ptr + off);
 }
 
-static ssize_t __init setup_pcpu_remap(size_t static_size)
+static ssize_t __init setup_pcpu_lpage(size_t static_size, bool chosen)
 {
-	static struct vm_struct vm;
-	size_t ptrs_size, dyn_size;
+	size_t map_size, dyn_size;
 	unsigned int cpu;
+	int i, j;
 	ssize_t ret;
 
-	/*
-	 * If large page isn't supported, there's no benefit in doing
-	 * this.  Also, on non-NUMA, embedding is better.
-	 *
-	 * NOTE: disabled for now.
-	 */
-	if (true || !cpu_has_pse || !pcpu_need_numa())
-		return -EINVAL;
+	if (!chosen) {
+		size_t vm_size = VMALLOC_END - VMALLOC_START;
+		size_t tot_size = num_possible_cpus() * PMD_SIZE;
+
+		/* on non-NUMA, embedding is better */
+		if (!pcpu_need_numa())
+			return -EINVAL;
+
+		/* don't consume more than 20% of vmalloc area */
+		if (tot_size > vm_size / 5) {
+			pr_info("PERCPU: too large chunk size %zuMB for "
+				"large page remap\n", tot_size >> 20);
+			return -EINVAL;
+		}
+	}
+
+	/* need PSE */
+	if (!cpu_has_pse) {
+		pr_warning("PERCPU: lpage allocator requires PSE\n");
+		return -EINVAL;
+	}
 
 	/*
 	 * Currently supports only single page.  Supporting multiple
 	 * pages won't be too difficult if it ever becomes necessary.
 	 */
-	pcpur_size = PFN_ALIGN(static_size + PERCPU_MODULE_RESERVE +
-			       PERCPU_DYNAMIC_RESERVE);
-	if (pcpur_size > PMD_SIZE) {
+	pcpul_size = PFN_ALIGN(static_size + PERCPU_MODULE_RESERVE +
+			       PERCPU_DYNAMIC_RESERVE);
+	if (pcpul_size > PMD_SIZE) {
 		pr_warning("PERCPU: static data is larger than large page, "
 			   "can't use large page\n");
 		return -EINVAL;
 	}
-	dyn_size = pcpur_size - static_size - PERCPU_FIRST_CHUNK_RESERVE;
+	dyn_size = pcpul_size - static_size - PERCPU_FIRST_CHUNK_RESERVE;
 
 	/* allocate pointer array and alloc large pages */
-	ptrs_size = PFN_ALIGN(num_possible_cpus() * sizeof(pcpur_ptrs[0]));
-	pcpur_ptrs = alloc_bootmem(ptrs_size);
+	map_size = PFN_ALIGN(num_possible_cpus() * sizeof(pcpul_map[0]));
+	pcpul_map = alloc_bootmem(map_size);
 
 	for_each_possible_cpu(cpu) {
-		pcpur_ptrs[cpu] = pcpu_alloc_bootmem(cpu, PMD_SIZE, PMD_SIZE);
-		if (!pcpur_ptrs[cpu])
+		pcpul_map[cpu].cpu = cpu;
+		pcpul_map[cpu].ptr = pcpu_alloc_bootmem(cpu, PMD_SIZE,
+							PMD_SIZE);
+		if (!pcpul_map[cpu].ptr) {
+			pr_warning("PERCPU: failed to allocate large page "
+				   "for cpu%u\n", cpu);
 			goto enomem;
+		}
 
 		/*
-		 * Only use pcpur_size bytes and give back the rest.
+		 * Only use pcpul_size bytes and give back the rest.
 		 *
 		 * Ingo: The 2MB up-rounding bootmem is needed to make
 		 * sure the partial 2MB page is still fully RAM - it's
 		 * not well-specified to have a PAT-incompatible area
 		 * (unmapped RAM, device memory, etc.) in that hole.
 		 */
-		free_bootmem(__pa(pcpur_ptrs[cpu] + pcpur_size),
-			     PMD_SIZE - pcpur_size);
+		free_bootmem(__pa(pcpul_map[cpu].ptr + pcpul_size),
+			     PMD_SIZE - pcpul_size);
 
-		memcpy(pcpur_ptrs[cpu], __per_cpu_load, static_size);
+		memcpy(pcpul_map[cpu].ptr, __per_cpu_load, static_size);
 	}
 
 	/* allocate address and map */
-	vm.flags = VM_ALLOC;
-	vm.size = num_possible_cpus() * PMD_SIZE;
-	vm_area_register_early(&vm, PMD_SIZE);
+	pcpul_vm.flags = VM_ALLOC;
+	pcpul_vm.size = num_possible_cpus() * PMD_SIZE;
+	vm_area_register_early(&pcpul_vm, PMD_SIZE);
 
 	for_each_possible_cpu(cpu) {
-		pmd_t *pmd;
+		pmd_t *pmd, pmd_v;
 
-		pmd = populate_extra_pmd((unsigned long)vm.addr
-					 + cpu * PMD_SIZE);
-		set_pmd(pmd, pfn_pmd(page_to_pfn(virt_to_page(pcpur_ptrs[cpu])),
-				     PAGE_KERNEL_LARGE));
+		pmd = populate_extra_pmd((unsigned long)pcpul_vm.addr +
+					 cpu * PMD_SIZE);
+		pmd_v = pfn_pmd(page_to_pfn(virt_to_page(pcpul_map[cpu].ptr)),
+				PAGE_KERNEL_LARGE);
+		set_pmd(pmd, pmd_v);
 	}
 
 	/* we're ready, commit */
 	pr_info("PERCPU: Remapped at %p with large pages, static data "
-		"%zu bytes\n", vm.addr, static_size);
+		"%zu bytes\n", pcpul_vm.addr, static_size);
 
-	ret = pcpu_setup_first_chunk(pcpur_get_page, static_size,
+	ret = pcpu_setup_first_chunk(pcpul_get_page, static_size,
 				     PERCPU_FIRST_CHUNK_RESERVE, dyn_size,
-				     PMD_SIZE, vm.addr, NULL);
-	goto out_free_ar;
+				     PMD_SIZE, pcpul_vm.addr, NULL);
+
+	/* sort pcpul_map array for pcpu_lpage_remapped() */
+	for (i = 0; i < num_possible_cpus() - 1; i++)
+		for (j = i + 1; j < num_possible_cpus(); j++)
+			if (pcpul_map[i].ptr > pcpul_map[j].ptr) {
+				struct pcpul_ent tmp = pcpul_map[i];
+				pcpul_map[i] = pcpul_map[j];
+				pcpul_map[j] = tmp;
+			}
+
+	return ret;
 
 enomem:
 	for_each_possible_cpu(cpu)
-		if (pcpur_ptrs[cpu])
-			free_bootmem(__pa(pcpur_ptrs[cpu]), PMD_SIZE);
-	ret = -ENOMEM;
-out_free_ar:
-	free_bootmem(__pa(pcpur_ptrs), ptrs_size);
-	return ret;
+		if (pcpul_map[cpu].ptr)
+			free_bootmem(__pa(pcpul_map[cpu].ptr), pcpul_size);
+	free_bootmem(__pa(pcpul_map), map_size);
+	return -ENOMEM;
+}
+
+/**
+ * pcpu_lpage_remapped - determine whether a kaddr is in pcpul recycled area
+ * @kaddr: the kernel address in question
+ *
+ * Determine whether @kaddr falls in the pcpul recycled area.  This is
+ * used by pageattr to detect VM aliases and break up the pcpu PMD
+ * mapping such that the same physical page is not mapped under
+ * different attributes.
+ *
+ * The recycled area is always at the tail of a partially used PMD
+ * page.
+ *
+ * RETURNS:
+ * Address of corresponding remapped pcpu address if match is found;
+ * otherwise, NULL.
+ */
+void *pcpu_lpage_remapped(void *kaddr)
+{
+	void *pmd_addr = (void *)((unsigned long)kaddr & PMD_MASK);
+	unsigned long offset = (unsigned long)kaddr & ~PMD_MASK;
+	int left = 0, right = num_possible_cpus() - 1;
+	int pos;
+
+	/* pcpul in use at all? */
+	if (!pcpul_map)
+		return NULL;
+
+	/* okay, perform binary search */
+	while (left <= right) {
+		pos = (left + right) / 2;
+
+		if (pcpul_map[pos].ptr < pmd_addr)
+			left = pos + 1;
+		else if (pcpul_map[pos].ptr > pmd_addr)
+			right = pos - 1;
+		else {
+			/* it shouldn't be in the area for the first chunk */
+			WARN_ON(offset < pcpul_size);
+
+			return pcpul_vm.addr +
+				pcpul_map[pos].cpu * PMD_SIZE + offset;
+		}
+	}
+
+	return NULL;
 }
 #else
-static ssize_t __init setup_pcpu_remap(size_t static_size)
+static ssize_t __init setup_pcpu_lpage(size_t static_size, bool chosen)
 {
 	return -EINVAL;
 }
@@ -249,7 +329,7 @@ static ssize_t __init setup_pcpu_remap(size_t static_size)
 * mapping so that it can use PMD mapping without additional TLB
 * pressure.
 */
-static ssize_t __init setup_pcpu_embed(size_t static_size)
+static ssize_t __init setup_pcpu_embed(size_t static_size, bool chosen)
 {
 	size_t reserve = PERCPU_MODULE_RESERVE + PERCPU_DYNAMIC_RESERVE;
@@ -258,7 +338,7 @@ static ssize_t __init setup_pcpu_embed(size_t static_size)
 	 * this.  Also, embedding allocation doesn't play well with
 	 * NUMA.
 	 */
-	if (!cpu_has_pse || pcpu_need_numa())
+	if (!chosen && (!cpu_has_pse || pcpu_need_numa()))
 		return -EINVAL;
 
 	return pcpu_embed_first_chunk(static_size, PERCPU_FIRST_CHUNK_RESERVE,
@@ -308,8 +388,11 @@ static ssize_t __init setup_pcpu_4k(size_t static_size)
 		void *ptr;
 
 		ptr = pcpu_alloc_bootmem(cpu, PAGE_SIZE, PAGE_SIZE);
-		if (!ptr)
+		if (!ptr) {
+			pr_warning("PERCPU: failed to allocate "
+				   "4k page for cpu%u\n", cpu);
 			goto enomem;
+		}
 
 		memcpy(ptr, __per_cpu_load + i * PAGE_SIZE, PAGE_SIZE);
 		pcpu4k_pages[j++] = virt_to_page(ptr);
@@ -333,6 +416,16 @@ static ssize_t __init setup_pcpu_4k(size_t static_size)
 	return ret;
 }
 
+/* for explicit first chunk allocator selection */
+static char pcpu_chosen_alloc[16] __initdata;
+
+static int __init percpu_alloc_setup(char *str)
+{
+	strncpy(pcpu_chosen_alloc, str, sizeof(pcpu_chosen_alloc) - 1);
+	return 0;
+}
+early_param("percpu_alloc", percpu_alloc_setup);
+
 static inline void setup_percpu_segment(int cpu)
 {
 #ifdef CONFIG_X86_32
@@ -346,11 +439,6 @@ static inline void setup_percpu_segment(int cpu)
 #endif
 }
 
-/*
- * Great future plan:
- * Declare PDA itself and support (irqstack,tss,pgd) as per cpu data.
- * Always point %gs to its beginning
- */
 void __init setup_per_cpu_areas(void)
 {
 	size_t static_size = __per_cpu_end - __per_cpu_start;
@@ -367,9 +455,26 @@ void __init setup_per_cpu_areas(void)
 	 * of large page mappings.  Please read comments on top of
 	 * each allocator for details.
 	 */
-	ret = setup_pcpu_remap(static_size);
-	if (ret < 0)
-		ret = setup_pcpu_embed(static_size);
+	ret = -EINVAL;
+	if (strlen(pcpu_chosen_alloc)) {
+		if (strcmp(pcpu_chosen_alloc, "4k")) {
+			if (!strcmp(pcpu_chosen_alloc, "lpage"))
+				ret = setup_pcpu_lpage(static_size, true);
+			else if (!strcmp(pcpu_chosen_alloc, "embed"))
+				ret = setup_pcpu_embed(static_size, true);
+			else
+				pr_warning("PERCPU: unknown allocator %s "
+					   "specified\n", pcpu_chosen_alloc);
+			if (ret < 0)
+				pr_warning("PERCPU: %s allocator failed (%zd), "
					   "falling back to 4k\n",
+					   pcpu_chosen_alloc, ret);
+		}
+	} else {
+		ret = setup_pcpu_lpage(static_size, false);
+		if (ret < 0)
+			ret = setup_pcpu_embed(static_size, false);
+	}
 	if (ret < 0)
 		ret = setup_pcpu_4k(static_size);
 	if (ret < 0)
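The sort loop added at the end of setup_pcpu_lpage() keeps pcpul_map ordered by ->ptr so that pcpu_lpage_remapped() can binary-search it for the PMD-aligned address handed in by pageattr. A stand-alone C sketch of that lookup shape (struct ent and the sample addresses are invented for the demo):

#include <stdio.h>
#include <stddef.h>

/* Simplified stand-in for struct pcpul_ent: one PMD-sized chunk per cpu. */
struct ent {
	unsigned int cpu;
	unsigned long ptr;	/* PMD-aligned chunk address */
};

/*
 * Binary search over a map sorted by ->ptr, the same shape as the lookup
 * pcpu_lpage_remapped() performs; returns the matching entry or NULL.
 */
static const struct ent *lookup(const struct ent *map, int n, unsigned long pmd_addr)
{
	int left = 0, right = n - 1;

	while (left <= right) {
		int pos = (left + right) / 2;

		if (map[pos].ptr < pmd_addr)
			left = pos + 1;
		else if (map[pos].ptr > pmd_addr)
			right = pos - 1;
		else
			return &map[pos];
	}
	return NULL;
}

int main(void)
{
	/* Three cpus, map already sorted by ptr, as the patch sorts it. */
	const struct ent map[] = {
		{ .cpu = 2, .ptr = 0x200000 },
		{ .cpu = 0, .ptr = 0x400000 },
		{ .cpu = 1, .ptr = 0x600000 },
	};
	const struct ent *e = lookup(map, 3, 0x400000);

	printf("0x400000 -> %s (cpu %u)\n", e ? "hit" : "miss", e ? e->cpu : 0);
	printf("0x500000 -> %s\n", lookup(map, 3, 0x500000) ? "hit" : "miss");
	return 0;
}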
arch/x86/mm/fault.c
@@ -1113,7 +1113,7 @@ do_page_fault(struct pt_regs *regs, unsigned long error_code)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault:
 	 */
-	fault = handle_mm_fault(mm, vma, address, write);
+	fault = handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0);
 
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		mm_fault_error(regs, error_code, address, fault);
arch/x86/mm/pageattr.c
@@ -11,6 +11,7 @@
 #include <linux/interrupt.h>
 #include <linux/seq_file.h>
 #include <linux/debugfs.h>
+#include <linux/pfn.h>
 
 #include <asm/e820.h>
 #include <asm/processor.h>
@@ -681,8 +682,9 @@ static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias);
 static int cpa_process_alias(struct cpa_data *cpa)
 {
 	struct cpa_data alias_cpa;
-	int ret = 0;
-	unsigned long temp_cpa_vaddr, vaddr;
+	unsigned long laddr = (unsigned long)__va(cpa->pfn << PAGE_SHIFT);
+	unsigned long vaddr, remapped;
+	int ret;
 
 	if (cpa->pfn >= max_pfn_mapped)
 		return 0;
@@ -706,42 +708,55 @@ static int cpa_process_alias(struct cpa_data *cpa)
 		    PAGE_OFFSET + (max_pfn_mapped << PAGE_SHIFT)))) {
 
 		alias_cpa = *cpa;
-		temp_cpa_vaddr = (unsigned long) __va(cpa->pfn << PAGE_SHIFT);
-		alias_cpa.vaddr = &temp_cpa_vaddr;
+		alias_cpa.vaddr = &laddr;
 		alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
 
 		ret = __change_page_attr_set_clr(&alias_cpa, 0);
+		if (ret)
+			return ret;
 	}
 
 #ifdef CONFIG_X86_64
-	if (ret)
-		return ret;
 	/*
-	 * No need to redo, when the primary call touched the high
-	 * mapping already:
-	 */
-	if (within(vaddr, (unsigned long) _text, _brk_end))
-		return 0;
-
-	/*
-	 * If the physical address is inside the kernel map, we need
+	 * If the primary call didn't touch the high mapping already
+	 * and the physical address is inside the kernel map, we need
 	 * to touch the high mapped kernel as well:
 	 */
-	if (!within(cpa->pfn, highmap_start_pfn(), highmap_end_pfn()))
-		return 0;
-
-	alias_cpa = *cpa;
-	temp_cpa_vaddr = (cpa->pfn << PAGE_SHIFT) + __START_KERNEL_map - phys_base;
-	alias_cpa.vaddr = &temp_cpa_vaddr;
-	alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
+	if (!within(vaddr, (unsigned long)_text, _brk_end) &&
+	    within(cpa->pfn, highmap_start_pfn(), highmap_end_pfn())) {
+		unsigned long temp_cpa_vaddr = (cpa->pfn << PAGE_SHIFT) +
+					       __START_KERNEL_map - phys_base;
+		alias_cpa = *cpa;
+		alias_cpa.vaddr = &temp_cpa_vaddr;
+		alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
 
-	/*
-	 * The high mapping range is imprecise, so ignore the return value.
-	 */
-	__change_page_attr_set_clr(&alias_cpa, 0);
+		/*
+		 * The high mapping range is imprecise, so ignore the
+		 * return value.
+		 */
+		__change_page_attr_set_clr(&alias_cpa, 0);
+	}
 #endif
 
-	return ret;
+	/*
+	 * If the PMD page was partially used for per-cpu remapping,
+	 * the recycled area needs to be split and modified.  Because
+	 * the area is always proper subset of a PMD page
+	 * cpa->numpages is guaranteed to be 1 for these areas, so
+	 * there's no need to loop over and check for further remaps.
+	 */
+	remapped = (unsigned long)pcpu_lpage_remapped((void *)laddr);
+	if (remapped) {
+		WARN_ON(cpa->numpages > 1);
+		alias_cpa = *cpa;
+		alias_cpa.vaddr = &remapped;
+		alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
+		ret = __change_page_attr_set_clr(&alias_cpa, 0);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
 }
 
 static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
arch/xtensa/mm/fault.c
@@ -106,7 +106,7 @@ void do_page_fault(struct pt_regs *regs)
 	 * the fault.
 	 */
 survive:
-	fault = handle_mm_fault(mm, vma, address, is_write);
+	fault = handle_mm_fault(mm, vma, address, is_write ? FAULT_FLAG_WRITE : 0);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
drivers/crypto/padlock-aes.c
@@ -18,9 +18,22 @@
 #include <linux/percpu.h>
 #include <linux/smp.h>
 #include <asm/byteorder.h>
+#include <asm/processor.h>
 #include <asm/i387.h>
 #include "padlock.h"
 
+/*
+ * Number of data blocks actually fetched for each xcrypt insn.
+ * Processors with prefetch errata will fetch extra blocks.
+ */
+static unsigned int ecb_fetch_blocks = 2;
+#define MAX_ECB_FETCH_BLOCKS (8)
+#define ecb_fetch_bytes (ecb_fetch_blocks * AES_BLOCK_SIZE)
+
+static unsigned int cbc_fetch_blocks = 1;
+#define MAX_CBC_FETCH_BLOCKS (4)
+#define cbc_fetch_bytes (cbc_fetch_blocks * AES_BLOCK_SIZE)
+
 /* Control word. */
 struct cword {
 	unsigned int __attribute__ ((__packed__))
@@ -172,73 +185,111 @@ static inline void padlock_store_cword(struct cword *cword)
  * should be used only inside the irq_ts_save/restore() context
  */
 
-static inline void padlock_xcrypt(const u8 *input, u8 *output, void *key,
-				  struct cword *control_word)
+static inline void rep_xcrypt_ecb(const u8 *input, u8 *output, void *key,
+				  struct cword *control_word, int count)
 {
 	asm volatile (".byte 0xf3,0x0f,0xa7,0xc8"	/* rep xcryptecb */
 		      : "+S"(input), "+D"(output)
-		      : "d"(control_word), "b"(key), "c"(1));
+		      : "d"(control_word), "b"(key), "c"(count));
 }
 
-static void aes_crypt_copy(const u8 *in, u8 *out, u32 *key, struct cword *cword)
+static inline u8 *rep_xcrypt_cbc(const u8 *input, u8 *output, void *key,
+				 u8 *iv, struct cword *control_word, int count)
+{
+	asm volatile (".byte 0xf3,0x0f,0xa7,0xd0"	/* rep xcryptcbc */
+		      : "+S"(input), "+D"(output), "+a"(iv)
+		      : "d"(control_word), "b"(key), "c"(count));
+	return iv;
+}
+
+static void ecb_crypt_copy(const u8 *in, u8 *out, u32 *key,
+			   struct cword *cword, int count)
 {
-	u8 buf[AES_BLOCK_SIZE * 2 + PADLOCK_ALIGNMENT - 1];
+	/*
+	 * Padlock prefetches extra data so we must provide mapped input buffers.
+	 * Assume there are at least 16 bytes of stack already in use.
+	 */
+	u8 buf[AES_BLOCK_SIZE * (MAX_ECB_FETCH_BLOCKS - 1) + PADLOCK_ALIGNMENT - 1];
+	u8 *tmp = PTR_ALIGN(&buf[0], PADLOCK_ALIGNMENT);
+
+	memcpy(tmp, in, count * AES_BLOCK_SIZE);
+	rep_xcrypt_ecb(tmp, out, key, cword, count);
+}
+
+static u8 *cbc_crypt_copy(const u8 *in, u8 *out, u32 *key,
+			  u8 *iv, struct cword *cword, int count)
+{
+	/*
+	 * Padlock prefetches extra data so we must provide mapped input buffers.
+	 * Assume there are at least 16 bytes of stack already in use.
+	 */
+	u8 buf[AES_BLOCK_SIZE * (MAX_CBC_FETCH_BLOCKS - 1) + PADLOCK_ALIGNMENT - 1];
 	u8 *tmp = PTR_ALIGN(&buf[0], PADLOCK_ALIGNMENT);
 
-	memcpy(tmp, in, AES_BLOCK_SIZE);
-	padlock_xcrypt(tmp, out, key, cword);
+	memcpy(tmp, in, count * AES_BLOCK_SIZE);
+	return rep_xcrypt_cbc(tmp, out, key, iv, cword, count);
 }
 
-static inline void aes_crypt(const u8 *in, u8 *out, u32 *key,
-			     struct cword *cword)
+static inline void ecb_crypt(const u8 *in, u8 *out, u32 *key,
+			     struct cword *cword, int count)
 {
-	/* padlock_xcrypt requires at least two blocks of data. */
-	if (unlikely(!(((unsigned long)in ^ (PAGE_SIZE - AES_BLOCK_SIZE)) &
-		       (PAGE_SIZE - 1)))) {
-		aes_crypt_copy(in, out, key, cword);
+	/* Padlock in ECB mode fetches at least ecb_fetch_bytes of data.
+	 * We could avoid some copying here but it's probably not worth it.
+	 */
+	if (unlikely(((unsigned long)in & PAGE_SIZE) + ecb_fetch_bytes > PAGE_SIZE)) {
+		ecb_crypt_copy(in, out, key, cword, count);
 		return;
 	}
 
-	padlock_xcrypt(in, out, key, cword);
+	rep_xcrypt_ecb(in, out, key, cword, count);
+}
+
+static inline u8 *cbc_crypt(const u8 *in, u8 *out, u32 *key,
+			    u8 *iv, struct cword *cword, int count)
+{
+	/* Padlock in CBC mode fetches at least cbc_fetch_bytes of data. */
+	if (unlikely(((unsigned long)in & PAGE_SIZE) + cbc_fetch_bytes > PAGE_SIZE))
+		return cbc_crypt_copy(in, out, key, iv, cword, count);
+
+	return rep_xcrypt_cbc(in, out, key, iv, cword, count);
 }
 
 static inline void padlock_xcrypt_ecb(const u8 *input, u8 *output, void *key,
 				      void *control_word, u32 count)
 {
-	if (count == 1) {
-		aes_crypt(input, output, key, control_word);
+	u32 initial = count & (ecb_fetch_blocks - 1);
+
+	if (count < ecb_fetch_blocks) {
+		ecb_crypt(input, output, key, control_word, count);
 		return;
 	}
 
-	asm volatile ("test $1, %%cl;"
-		      "je 1f;"
-#ifndef CONFIG_X86_64
-		      "lea -1(%%ecx), %%eax;"
-		      "mov $1, %%ecx;"
-#else
-		      "lea -1(%%rcx), %%rax;"
-		      "mov $1, %%rcx;"
-#endif
-		      ".byte 0xf3,0x0f,0xa7,0xc8;"	/* rep xcryptecb */
-#ifndef CONFIG_X86_64
-		      "mov %%eax, %%ecx;"
-#else
-		      "mov %%rax, %%rcx;"
-#endif
-		      "1:"
-		      ".byte 0xf3,0x0f,0xa7,0xc8"	/* rep xcryptecb */
+	if (initial)
+		asm volatile (".byte 0xf3,0x0f,0xa7,0xc8"	/* rep xcryptecb */
+			      : "+S"(input), "+D"(output)
+			      : "d"(control_word), "b"(key), "c"(initial));
+
+	asm volatile (".byte 0xf3,0x0f,0xa7,0xc8"	/* rep xcryptecb */
 		      : "+S"(input), "+D"(output)
-		      : "d"(control_word), "b"(key), "c"(count)
-		      : "ax");
+		      : "d"(control_word), "b"(key), "c"(count - initial));
 }
 
 static inline u8 *padlock_xcrypt_cbc(const u8 *input, u8 *output, void *key,
 				     u8 *iv, void *control_word, u32 count)
 {
-	/* rep xcryptcbc */
-	asm volatile (".byte 0xf3,0x0f,0xa7,0xd0"
+	u32 initial = count & (cbc_fetch_blocks - 1);
+
+	if (count < cbc_fetch_blocks)
+		return cbc_crypt(input, output, key, iv, control_word, count);
+
+	if (initial)
+		asm volatile (".byte 0xf3,0x0f,0xa7,0xd0"	/* rep xcryptcbc */
+			      : "+S"(input), "+D"(output), "+a"(iv)
+			      : "d"(control_word), "b"(key), "c"(initial));
+
+	asm volatile (".byte 0xf3,0x0f,0xa7,0xd0"	/* rep xcryptcbc */
 		      : "+S"(input), "+D"(output), "+a"(iv)
-		      : "d"(control_word), "b"(key), "c"(count));
+		      : "d"(control_word), "b"(key), "c"(count - initial));
+
 	return iv;
 }
@@ -249,7 +300,7 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 
 	padlock_reset_key(&ctx->cword.encrypt);
 	ts_state = irq_ts_save();
-	aes_crypt(in, out, ctx->E, &ctx->cword.encrypt);
+	ecb_crypt(in, out, ctx->E, &ctx->cword.encrypt, 1);
 	irq_ts_restore(ts_state);
 	padlock_store_cword(&ctx->cword.encrypt);
 }
@@ -261,7 +312,7 @@ static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 
 	padlock_reset_key(&ctx->cword.encrypt);
 	ts_state = irq_ts_save();
-	aes_crypt(in, out, ctx->D, &ctx->cword.decrypt);
+	ecb_crypt(in, out, ctx->D, &ctx->cword.decrypt, 1);
 	irq_ts_restore(ts_state);
 	padlock_store_cword(&ctx->cword.encrypt);
 }
@@ -454,6 +505,7 @@ static struct crypto_alg cbc_aes_alg = {
 static int __init padlock_init(void)
 {
 	int ret;
+	struct cpuinfo_x86 *c = &cpu_data(0);
 
 	if (!cpu_has_xcrypt) {
 		printk(KERN_NOTICE PFX "VIA PadLock not detected.\n");
@@ -476,6 +528,12 @@ static int __init padlock_init(void)
 	printk(KERN_NOTICE PFX "Using VIA PadLock ACE for AES algorithm.\n");
 
+	if (c->x86 == 6 && c->x86_model == 15 && c->x86_mask == 2) {
+		ecb_fetch_blocks = MAX_ECB_FETCH_BLOCKS;
+		cbc_fetch_blocks = MAX_CBC_FETCH_BLOCKS;
+		printk(KERN_NOTICE PFX "VIA Nano stepping 2 detected: enabling workaround.\n");
+	}
+
 out:
 	return ret;
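The new ecb_crypt()/cbc_crypt() helpers decide per call whether the hardware's over-fetch (ecb_fetch_bytes or cbc_fetch_bytes beyond the supplied buffer) could run past the end of the input's page; if so, the data is bounced through an aligned on-stack buffer first. A stand-alone sketch of that boundary test, written with a plain offset-within-page computation for clarity (the block count mirrors the Nano workaround value but is an assumption of the sketch):

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define AES_BLOCK_SIZE	16UL

/* Assumed workaround value mirroring the patch: Nano may fetch up to 8 ECB blocks. */
static unsigned long ecb_fetch_blocks = 8;
#define ecb_fetch_bytes (ecb_fetch_blocks * AES_BLOCK_SIZE)

/*
 * Would a fetch of ecb_fetch_bytes starting at 'addr' cross the end of the
 * page that 'addr' lives in?  If so, the driver copies the input into a
 * bounce buffer before issuing the xcrypt instruction.
 */
static int needs_bounce(unsigned long addr)
{
	unsigned long offset = addr % PAGE_SIZE;	/* offset within the page */

	return offset + ecb_fetch_bytes > PAGE_SIZE;
}

int main(void)
{
	printf("addr at page start     : bounce=%d\n", needs_bounce(0x10000));
	printf("addr 16 bytes from end : bounce=%d\n", needs_bounce(0x10000 + PAGE_SIZE - 16));
	return 0;
}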
drivers/mmc/host/Kconfig
@@ -94,6 +94,31 @@ config MMC_SDHCI_PLTFM
 
 	  If unsure, say N.
 
+config MMC_SDHCI_S3C
+	tristate "SDHCI support on Samsung S3C SoC"
+	depends on MMC_SDHCI && (PLAT_S3C24XX || PLAT_S3C64XX)
+	help
+	  This selects the Secure Digital Host Controller Interface (SDHCI)
+	  often referrered to as the HSMMC block in some of the Samsung S3C
+	  range of SoC.
+
+	  Note, due to the problems with DMA, the DMA support is only
+	  available with CONFIG_EXPERIMENTAL is selected.
+
+	  If you have a controller with this interface, say Y or M here.
+
+	  If unsure, say N.
+
+config MMC_SDHCI_S3C_DMA
+	bool "DMA support on S3C SDHCI"
+	depends on MMC_SDHCI_S3C && EXPERIMENTAL
+	help
+	  Enable DMA support on the Samsung S3C SDHCI glue. The DMA
+	  has proved to be problematic if the controller encounters
+	  certain errors, and thus should be treated with care.
+
+	  YMMV.
+
 config MMC_OMAP
 	tristate "TI OMAP Multimedia Card Interface support"
 	depends on ARCH_OMAP
@@ -265,3 +290,14 @@ config MMC_CB710
 	  This driver can also be built as a module. If so, the module
 	  will be called cb710-mmc.
 
+config MMC_VIA_SDMMC
+	tristate "VIA SD/MMC Card Reader Driver"
+	depends on PCI
+	help
+	  This selects the VIA SD/MMC Card Reader driver, say Y or M here.
+	  VIA provides one multi-functional card reader which integrated into
+	  some motherboards manufactured by VIA. This card reader supports
+	  SD/MMC/SDHC.
+	  If you have a controller with this interface, say Y or M here.
+
+	  If unsure, say N.
drivers/mmc/host/Makefile
@@ -15,6 +15,7 @@ obj-$(CONFIG_MMC_SDHCI_PCI)	+= sdhci-pci.o
 obj-$(CONFIG_MMC_RICOH_MMC)	+= ricoh_mmc.o
 obj-$(CONFIG_MMC_SDHCI_OF)	+= sdhci-of.o
 obj-$(CONFIG_MMC_SDHCI_PLTFM)	+= sdhci-pltfm.o
+obj-$(CONFIG_MMC_SDHCI_S3C)	+= sdhci-s3c.o
 obj-$(CONFIG_MMC_WBSD)		+= wbsd.o
 obj-$(CONFIG_MMC_AU1X)		+= au1xmmc.o
 obj-$(CONFIG_MMC_OMAP)		+= omap.o
@@ -31,6 +32,7 @@ obj-$(CONFIG_MMC_S3C)		+= s3cmci.o
 obj-$(CONFIG_MMC_SDRICOH_CS)	+= sdricoh_cs.o
 obj-$(CONFIG_MMC_TMIO)		+= tmio_mmc.o
 obj-$(CONFIG_MMC_CB710)		+= cb710-mmc.o
+obj-$(CONFIG_MMC_VIA_SDMMC)	+= via-sdmmc.o
 
 ifeq ($(CONFIG_CB710_DEBUG),y)
 	CFLAGS-cb710-mmc	+= -DDEBUG
drivers/mmc/host/s3cmci.c
@@ -794,7 +794,7 @@ static void s3cmci_dma_setup(struct s3cmci_host *host,
 			       host->mem->start + host->sdidata);
 
 	if (!setup_ok) {
-		s3c2410_dma_config(host->dma, 4, 0);
+		s3c2410_dma_config(host->dma, 4);
 		s3c2410_dma_set_buffdone_fn(host->dma,
 					    s3cmci_dma_done_callback);
 		s3c2410_dma_setflags(host->dma, S3C2410_DMAF_AUTOSTART);
drivers/mmc/host/sdhci-of.c
浏览文件 @
b7f797cb
...
@@ -250,6 +250,9 @@ static int __devinit sdhci_of_probe(struct of_device *ofdev,
 		host->ops = &sdhci_of_data->ops;
 	}

+	if (of_get_property(np, "sdhci,1-bit-only", NULL))
+		host->quirks |= SDHCI_QUIRK_FORCE_1_BIT_DATA;
+
 	clk = of_get_property(np, "clock-frequency", &size);
 	if (clk && size == sizeof(*clk) && *clk)
 		of_host->clock = *clk;
...
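For reference, the new "sdhci,1-bit-only" handling follows the usual device-tree boolean-property pattern: if the property is present, a quirk bit is set on the host. Below is a minimal user-space sketch of that pattern; of_get_property_stub() and the property table are stand-ins invented for illustration, not part of this patch.

#include <stdio.h>
#include <string.h>

#define SDHCI_QUIRK_FORCE_1_BIT_DATA (1 << 22)

/* stand-in for of_get_property(): returns non-NULL when the node carries
 * the named boolean property; the tiny table below is made-up test data */
static const char *node_props[] = { "sdhci,1-bit-only", "clock-frequency", NULL };

static const void *of_get_property_stub(const char *name)
{
	int i;

	for (i = 0; node_props[i]; i++)
		if (!strcmp(node_props[i], name))
			return node_props[i];
	return NULL;
}

int main(void)
{
	unsigned int quirks = 0;

	/* presence of the boolean property is all that matters */
	if (of_get_property_stub("sdhci,1-bit-only"))
		quirks |= SDHCI_QUIRK_FORCE_1_BIT_DATA;

	printf("quirks = 0x%08x\n", quirks);
	return 0;
}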
drivers/mmc/host/sdhci-pci.c
...
@@ -284,6 +284,18 @@ static const struct sdhci_pci_fixes sdhci_jmicron = {
 	.resume		= jmicron_resume,
 };

+static int via_probe(struct sdhci_pci_chip *chip)
+{
+	if (chip->pdev->revision == 0x10)
+		chip->quirks |= SDHCI_QUIRK_DELAY_AFTER_POWER;
+
+	return 0;
+}
+
+static const struct sdhci_pci_fixes sdhci_via = {
+	.probe		= via_probe,
+};
+
 static const struct pci_device_id pci_ids[] __devinitdata = {
 	{
 		.vendor		= PCI_VENDOR_ID_RICOH,
...
@@ -349,6 +361,14 @@ static const struct pci_device_id pci_ids[] __devinitdata = {
 		.driver_data	= (kernel_ulong_t)&sdhci_jmicron,
 	},

+	{
+		.vendor		= PCI_VENDOR_ID_VIA,
+		.device		= 0x95d0,
+		.subvendor	= PCI_ANY_ID,
+		.subdevice	= PCI_ANY_ID,
+		.driver_data	= (kernel_ulong_t)&sdhci_via,
+	},
+
 	{	/* Generic SD host controller */
 		PCI_DEVICE_CLASS((PCI_CLASS_SYSTEM_SDHCI << 8), 0xFFFF00)
 	},
...
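The VIA entry ties a PCI device ID to an sdhci_pci_fixes structure whose probe callback sets SDHCI_QUIRK_DELAY_AFTER_POWER for revision 0x10 parts; the core later keys its behaviour off that bit. A small stand-alone sketch of the same quirk pattern, with a made-up chip struct standing in for the real sdhci_pci_chip:

#include <stdio.h>

#define SDHCI_QUIRK_DELAY_AFTER_POWER (1 << 23)

struct chip { unsigned int quirks; unsigned char revision; };

/* mirrors the via_probe() idea: a particular silicon revision gets the quirk */
static int via_probe_sketch(struct chip *chip)
{
	if (chip->revision == 0x10)
		chip->quirks |= SDHCI_QUIRK_DELAY_AFTER_POWER;
	return 0;
}

int main(void)
{
	struct chip c = { .quirks = 0, .revision = 0x10 };

	via_probe_sketch(&c);
	/* the core would test the bit, e.g. to insert a 10ms power-up delay */
	printf("delay quirk set: %s\n",
	       (c.quirks & SDHCI_QUIRK_DELAY_AFTER_POWER) ? "yes" : "no");
	return 0;
}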
drivers/mmc/host/sdhci-s3c.c
new file mode 100644
/* linux/drivers/mmc/host/sdhci-s3c.c
*
* Copyright 2008 Openmoko Inc.
* Copyright 2008 Simtec Electronics
* Ben Dooks <ben@simtec.co.uk>
* http://armlinux.simtec.co.uk/
*
* SDHCI (HSMMC) support for Samsung SoC
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/platform_device.h>
#include <linux/clk.h>
#include <linux/io.h>
#include <linux/mmc/host.h>
#include <plat/sdhci.h>
#include <plat/regs-sdhci.h>
#include "sdhci.h"
#define MAX_BUS_CLK (4)
/**
* struct sdhci_s3c - S3C SDHCI instance
* @host: The SDHCI host created
 * @pdev: The platform device we were created from.
* @ioarea: The resource created when we claimed the IO area.
* @pdata: The platform data for this controller.
* @cur_clk: The index of the current bus clock.
* @clk_io: The clock for the internal bus interface.
* @clk_bus: The clocks that are available for the SD/MMC bus clock.
*/
struct sdhci_s3c {
	struct sdhci_host	*host;
	struct platform_device	*pdev;
	struct resource		*ioarea;
	struct s3c_sdhci_platdata *pdata;
	unsigned int		cur_clk;

	struct clk		*clk_io;
	struct clk		*clk_bus[MAX_BUS_CLK];
};

static inline struct sdhci_s3c *to_s3c(struct sdhci_host *host)
{
	return sdhci_priv(host);
}
/**
* get_curclk - convert ctrl2 register to clock source number
* @ctrl2: Control2 register value.
*/
static u32 get_curclk(u32 ctrl2)
{
	ctrl2 &= S3C_SDHCI_CTRL2_SELBASECLK_MASK;
	ctrl2 >>= S3C_SDHCI_CTRL2_SELBASECLK_SHIFT;

	return ctrl2;
}
static void sdhci_s3c_check_sclk(struct sdhci_host *host)
{
	struct sdhci_s3c *ourhost = to_s3c(host);
	u32 tmp = readl(host->ioaddr + S3C_SDHCI_CONTROL2);

	if (get_curclk(tmp) != ourhost->cur_clk) {
		dev_dbg(&ourhost->pdev->dev, "restored ctrl2 clock setting\n");

		tmp &= ~S3C_SDHCI_CTRL2_SELBASECLK_MASK;
		tmp |= ourhost->cur_clk << S3C_SDHCI_CTRL2_SELBASECLK_SHIFT;
		writel(tmp, host->ioaddr + 0x80);
	}
}
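get_curclk() and sdhci_s3c_check_sclk() are read-modify-write accesses to the SELBASECLK bit-field in CONTROL2. The snippet below shows the same mask/shift arithmetic in isolation; the shift and field width used here are illustrative guesses, since the real S3C_SDHCI_CTRL2_SELBASECLK_* values come from <plat/regs-sdhci.h> and are not shown in this patch.

#include <stdio.h>
#include <stdint.h>

/* illustrative values only, not the real register definition */
#define SELBASECLK_SHIFT 4
#define SELBASECLK_MASK  (0x3 << SELBASECLK_SHIFT)

static unsigned get_curclk(uint32_t ctrl2)
{
	return (ctrl2 & SELBASECLK_MASK) >> SELBASECLK_SHIFT;
}

static uint32_t set_curclk(uint32_t ctrl2, unsigned src)
{
	ctrl2 &= ~SELBASECLK_MASK;	/* clear the old source selection */
	ctrl2 |= src << SELBASECLK_SHIFT;
	return ctrl2;
}

int main(void)
{
	uint32_t ctrl2 = 0;

	ctrl2 = set_curclk(ctrl2, 2);
	printf("CONTROL2 = 0x%08x, source = %u\n", (unsigned)ctrl2, get_curclk(ctrl2));
	return 0;
}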
/**
* sdhci_s3c_get_max_clk - callback to get maximum clock frequency.
* @host: The SDHCI host instance.
*
 * Callback to return the maximum clock rate achievable by the controller.
*/
static unsigned int sdhci_s3c_get_max_clk(struct sdhci_host *host)
{
	struct sdhci_s3c *ourhost = to_s3c(host);
	struct clk *busclk;
	unsigned int rate, max;
	int clk;

	/* note, a reset will reset the clock source */
	sdhci_s3c_check_sclk(host);

	for (max = 0, clk = 0; clk < MAX_BUS_CLK; clk++) {
		busclk = ourhost->clk_bus[clk];
		if (!busclk)
			continue;

		rate = clk_get_rate(busclk);
		if (rate > max)
			max = rate;
	}

	return max;
}
static unsigned int sdhci_s3c_get_timeout_clk(struct sdhci_host *host)
{
	return sdhci_s3c_get_max_clk(host) / 1000000;
}
/**
 * sdhci_s3c_consider_clock - consider one of the bus clocks for current setting
* @ourhost: Our SDHCI instance.
* @src: The source clock index.
* @wanted: The clock frequency wanted.
*/
static unsigned int sdhci_s3c_consider_clock(struct sdhci_s3c *ourhost,
					     unsigned int src,
					     unsigned int wanted)
{
	unsigned long rate;
	struct clk *clksrc = ourhost->clk_bus[src];
	int div;

	if (!clksrc)
		return UINT_MAX;

	rate = clk_get_rate(clksrc);

	for (div = 1; div < 256; div *= 2) {
		if ((rate / div) <= wanted)
			break;
	}

	dev_dbg(&ourhost->pdev->dev, "clk %d: rate %ld, want %d, got %ld\n",
		src, rate, wanted, rate / div);

	return (wanted - (rate / div));
}
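To see what sdhci_s3c_consider_clock() returns, here is the same power-of-two divider search run stand-alone with example numbers (a 133 MHz source clock and a 25 MHz request, both made up for illustration): the loop stops at the first divider that brings the rate at or below the request, and the return value is how far short of the request that lands.

#include <stdio.h>

int main(void)
{
	unsigned long rate = 133000000;	/* example source clock: 133 MHz */
	unsigned int wanted = 25000000;	/* card asks for 25 MHz */
	int div;

	/* same search as the driver: divide by 1, 2, 4, ... up to 128 */
	for (div = 1; div < 256; div *= 2) {
		if ((rate / div) <= wanted)
			break;
	}

	printf("div=%d gives %lu Hz, delta=%ld\n",
	       div, rate / div, (long)wanted - (long)(rate / div));
	return 0;
}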
/**
* sdhci_s3c_set_clock - callback on clock change
* @host: The SDHCI host being changed
* @clock: The clock rate being requested.
*
* When the card's clock is going to be changed, look at the new frequency
* and find the best clock source to go with it.
*/
static void sdhci_s3c_set_clock(struct sdhci_host *host, unsigned int clock)
{
	struct sdhci_s3c *ourhost = to_s3c(host);
	unsigned int best = UINT_MAX;
	unsigned int delta;
	int best_src = 0;
	int src;
	u32 ctrl;

	/* don't bother if the clock is going off. */
	if (clock == 0)
		return;

	for (src = 0; src < MAX_BUS_CLK; src++) {
		delta = sdhci_s3c_consider_clock(ourhost, src, clock);
		if (delta < best) {
			best = delta;
			best_src = src;
		}
	}

	dev_dbg(&ourhost->pdev->dev,
		"selected source %d, clock %d, delta %d\n",
		 best_src, clock, best);

	/* select the new clock source */

	if (ourhost->cur_clk != best_src) {
		struct clk *clk = ourhost->clk_bus[best_src];

		/* turn clock off to card before changing clock source */
		writew(0, host->ioaddr + SDHCI_CLOCK_CONTROL);

		ourhost->cur_clk = best_src;
		host->max_clk = clk_get_rate(clk);
		host->timeout_clk = sdhci_s3c_get_timeout_clk(host);

		ctrl = readl(host->ioaddr + S3C_SDHCI_CONTROL2);
		ctrl &= ~S3C_SDHCI_CTRL2_SELBASECLK_MASK;
		ctrl |= best_src << S3C_SDHCI_CTRL2_SELBASECLK_SHIFT;
		writel(ctrl, host->ioaddr + S3C_SDHCI_CONTROL2);
	}

	/* reconfigure the hardware for new clock rate */

	{
		struct mmc_ios ios;

		ios.clock = clock;

		if (ourhost->pdata->cfg_card)
			(ourhost->pdata->cfg_card)(ourhost->pdev, host->ioaddr,
						   &ios, NULL);
	}
}
static struct sdhci_ops sdhci_s3c_ops = {
	.get_max_clock		= sdhci_s3c_get_max_clk,
	.get_timeout_clock	= sdhci_s3c_get_timeout_clk,
	.set_clock		= sdhci_s3c_set_clock,
};
static int __devinit sdhci_s3c_probe(struct platform_device *pdev)
{
	struct s3c_sdhci_platdata *pdata = pdev->dev.platform_data;
	struct device *dev = &pdev->dev;
	struct sdhci_host *host;
	struct sdhci_s3c *sc;
	struct resource *res;
	int ret, irq, ptr, clks;

	if (!pdata) {
		dev_err(dev, "no device data specified\n");
		return -ENOENT;
	}

	irq = platform_get_irq(pdev, 0);
	if (irq < 0) {
		dev_err(dev, "no irq specified\n");
		return irq;
	}

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res) {
		dev_err(dev, "no memory specified\n");
		return -ENOENT;
	}

	host = sdhci_alloc_host(dev, sizeof(struct sdhci_s3c));
	if (IS_ERR(host)) {
		dev_err(dev, "sdhci_alloc_host() failed\n");
		return PTR_ERR(host);
	}

	sc = sdhci_priv(host);

	sc->host = host;
	sc->pdev = pdev;
	sc->pdata = pdata;

	platform_set_drvdata(pdev, host);

	sc->clk_io = clk_get(dev, "hsmmc");
	if (IS_ERR(sc->clk_io)) {
		dev_err(dev, "failed to get io clock\n");
		ret = PTR_ERR(sc->clk_io);
		goto err_io_clk;
	}

	/* enable the local io clock and keep it running for the moment. */
	clk_enable(sc->clk_io);

	for (clks = 0, ptr = 0; ptr < MAX_BUS_CLK; ptr++) {
		struct clk *clk;
		char *name = pdata->clocks[ptr];

		if (name == NULL)
			continue;

		clk = clk_get(dev, name);
		if (IS_ERR(clk)) {
			dev_err(dev, "failed to get clock %s\n", name);
			continue;
		}

		clks++;
		sc->clk_bus[ptr] = clk;
		clk_enable(clk);

		dev_info(dev, "clock source %d: %s (%ld Hz)\n",
			 ptr, name, clk_get_rate(clk));
	}

	if (clks == 0) {
		dev_err(dev, "failed to find any bus clocks\n");
		ret = -ENOENT;
		goto err_no_busclks;
	}

	sc->ioarea = request_mem_region(res->start, resource_size(res),
					mmc_hostname(host->mmc));
	if (!sc->ioarea) {
		dev_err(dev, "failed to reserve register area\n");
		ret = -ENXIO;
		goto err_req_regs;
	}

	host->ioaddr = ioremap_nocache(res->start, resource_size(res));
	if (!host->ioaddr) {
		dev_err(dev, "failed to map registers\n");
		ret = -ENXIO;
		goto err_req_regs;
	}

	/* Ensure we have minimal gpio selected CMD/CLK/Detect */
	if (pdata->cfg_gpio)
		pdata->cfg_gpio(pdev, pdata->max_width);

	host->hw_name = "samsung-hsmmc";
	host->ops = &sdhci_s3c_ops;
	host->quirks = 0;
	host->irq = irq;

	/* Setup quirks for the controller */

	/* Currently with ADMA enabled we are getting some length
	 * interrupts that are not being dealt with, so disable
	 * ADMA until this is sorted out. */
	host->quirks |= SDHCI_QUIRK_BROKEN_ADMA;
	host->quirks |= SDHCI_QUIRK_32BIT_ADMA_SIZE;

#ifndef CONFIG_MMC_SDHCI_S3C_DMA
	/* we currently see overruns on errors, so disable the SDMA
	 * support as well. */
	host->quirks |= SDHCI_QUIRK_BROKEN_DMA;

	/* PIO currently has problems with multi-block IO */
	host->quirks |= SDHCI_QUIRK_NO_MULTIBLOCK;

#endif /* CONFIG_MMC_SDHCI_S3C_DMA */

	/* It seems we do not get a DATA transfer complete on non-busy
	 * transfers, not sure if this is a problem with this specific
	 * SDHCI block, or a missing configuration that needs to be set. */
	host->quirks |= SDHCI_QUIRK_NO_BUSY_IRQ;

	host->quirks |= (SDHCI_QUIRK_32BIT_DMA_ADDR |
			 SDHCI_QUIRK_32BIT_DMA_SIZE);

	ret = sdhci_add_host(host);
	if (ret) {
		dev_err(dev, "sdhci_add_host() failed\n");
		goto err_add_host;
	}

	return 0;

 err_add_host:
	release_resource(sc->ioarea);
	kfree(sc->ioarea);

 err_req_regs:
	for (ptr = 0; ptr < MAX_BUS_CLK; ptr++) {
		clk_disable(sc->clk_bus[ptr]);
		clk_put(sc->clk_bus[ptr]);
	}

 err_no_busclks:
	clk_disable(sc->clk_io);
	clk_put(sc->clk_io);

 err_io_clk:
	sdhci_free_host(host);

	return ret;
}
static int __devexit sdhci_s3c_remove(struct platform_device *pdev)
{
	return 0;
}
#ifdef CONFIG_PM
static int sdhci_s3c_suspend(struct platform_device *dev, pm_message_t pm)
{
	struct sdhci_host *host = platform_get_drvdata(dev);

	sdhci_suspend_host(host, pm);
	return 0;
}

static int sdhci_s3c_resume(struct platform_device *dev)
{
	struct sdhci_host *host = platform_get_drvdata(dev);

	sdhci_resume_host(host);
	return 0;
}
#else
#define sdhci_s3c_suspend NULL
#define sdhci_s3c_resume NULL
#endif
static struct platform_driver sdhci_s3c_driver = {
	.probe		= sdhci_s3c_probe,
	.remove		= __devexit_p(sdhci_s3c_remove),
	.suspend	= sdhci_s3c_suspend,
	.resume		= sdhci_s3c_resume,
	.driver		= {
		.owner	= THIS_MODULE,
		.name	= "s3c-sdhci",
	},
};
static int __init sdhci_s3c_init(void)
{
	return platform_driver_register(&sdhci_s3c_driver);
}

static void __exit sdhci_s3c_exit(void)
{
	platform_driver_unregister(&sdhci_s3c_driver);
}

module_init(sdhci_s3c_init);
module_exit(sdhci_s3c_exit);

MODULE_DESCRIPTION("Samsung SDHCI (HSMMC) glue");
MODULE_AUTHOR("Ben Dooks, <ben@simtec.co.uk>");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:s3c-sdhci");
drivers/mmc/host/sdhci.c
...
@@ -584,7 +584,7 @@ static u8 sdhci_calc_timeout(struct sdhci_host *host, struct mmc_data *data)
 	 * longer to time out, but that's much better than having a too-short
 	 * timeout value.
 	 */
-	if ((host->quirks & SDHCI_QUIRK_BROKEN_TIMEOUT_VAL))
+	if (host->quirks & SDHCI_QUIRK_BROKEN_TIMEOUT_VAL)
 		return 0xE;

 	/* timeout in us */
...
@@ -1051,12 +1051,19 @@ static void sdhci_set_power(struct sdhci_host *host, unsigned short power)
 	 * At least the Marvell CaFe chip gets confused if we set the voltage
 	 * and set turn on power at the same time, so set the voltage first.
 	 */
-	if ((host->quirks & SDHCI_QUIRK_NO_SIMULT_VDD_AND_POWER))
+	if (host->quirks & SDHCI_QUIRK_NO_SIMULT_VDD_AND_POWER)
 		sdhci_writeb(host, pwr, SDHCI_POWER_CONTROL);

 	pwr |= SDHCI_POWER_ON;

 	sdhci_writeb(host, pwr, SDHCI_POWER_CONTROL);
+
+	/*
+	 * Some controllers need an extra 10ms delay before they
+	 * can apply clock after applying power
+	 */
+	if (host->quirks & SDHCI_QUIRK_DELAY_AFTER_POWER)
+		mdelay(10);
 }

/*****************************************************************************\
...
@@ -1382,6 +1389,35 @@ static void sdhci_cmd_irq(struct sdhci_host *host, u32 intmask)
 		sdhci_finish_command(host);
 }

+#ifdef DEBUG
+static void sdhci_show_adma_error(struct sdhci_host *host)
+{
+	const char *name = mmc_hostname(host->mmc);
+	u8 *desc = host->adma_desc;
+	__le32 *dma;
+	__le16 *len;
+	u8 attr;
+
+	sdhci_dumpregs(host);
+
+	while (true) {
+		dma = (__le32 *)(desc + 4);
+		len = (__le16 *)(desc + 2);
+		attr = *desc;
+
+		DBG("%s: %p: DMA 0x%08x, LEN 0x%04x, Attr=0x%02x\n",
+		    name, desc, le32_to_cpu(*dma), le16_to_cpu(*len), attr);
+
+		desc += 8;
+
+		if (attr & 2)
+			break;
+	}
+}
+#else
+static void sdhci_show_adma_error(struct sdhci_host *host) { }
+#endif
+
 static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
 {
 	BUG_ON(intmask == 0);
...
@@ -1411,8 +1447,11 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
 		host->data->error = -ETIMEDOUT;
 	else if (intmask & (SDHCI_INT_DATA_CRC | SDHCI_INT_DATA_END_BIT))
 		host->data->error = -EILSEQ;
-	else if (intmask & SDHCI_INT_ADMA_ERROR)
+	else if (intmask & SDHCI_INT_ADMA_ERROR) {
+		printk(KERN_ERR "%s: ADMA error\n", mmc_hostname(host->mmc));
+		sdhci_show_adma_error(host);
 		host->data->error = -EIO;
+	}

 	if (host->data->error)
 		sdhci_finish_data(host);
...
@@ -1729,7 +1768,10 @@ int sdhci_add_host(struct sdhci_host *host)
 	mmc->ops = &sdhci_ops;
 	mmc->f_min = host->max_clk / 256;
 	mmc->f_max = host->max_clk;
-	mmc->caps = MMC_CAP_4_BIT_DATA | MMC_CAP_SDIO_IRQ;
+	mmc->caps = MMC_CAP_SDIO_IRQ;
+
+	if (!(host->quirks & SDHCI_QUIRK_FORCE_1_BIT_DATA))
+		mmc->caps |= MMC_CAP_4_BIT_DATA;

 	if (caps & SDHCI_CAN_DO_HISPD)
 		mmc->caps |= MMC_CAP_SD_HIGHSPEED;
...
@@ -1802,7 +1844,7 @@ int sdhci_add_host(struct sdhci_host *host)
 	/*
 	 * Maximum block count.
 	 */
-	mmc->max_blk_count = 65535;
+	mmc->max_blk_count = (host->quirks & SDHCI_QUIRK_NO_MULTIBLOCK) ? 1 : 65535;

 	/*
 	 * Init tasklets.
...
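sdhci_show_adma_error() walks the 8-byte ADMA2 descriptor table: byte 0 carries the attributes (bit 1 marks the final descriptor), bytes 2-3 the length and bytes 4-7 the DMA address. A stand-alone sketch of the same walk over a fabricated two-entry table (the descriptor contents are made-up test data):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint8_t table[] = {
		0x21, 0x00, 0x00, 0x02, 0x00, 0x10, 0x00, 0x00, /* 512 bytes @ 0x1000 */
		0x23, 0x00, 0x00, 0x01, 0x00, 0x20, 0x00, 0x00, /* 256 bytes @ 0x2000, end */
	};
	uint8_t *desc = table;

	for (;;) {
		uint8_t attr = desc[0];
		uint16_t len = desc[2] | (desc[3] << 8);
		uint32_t addr = desc[4] | (desc[5] << 8) | (desc[6] << 16) |
				((uint32_t)desc[7] << 24);

		printf("desc: DMA 0x%08x, LEN 0x%04x, Attr=0x%02x\n",
		       (unsigned)addr, len, attr);

		desc += 8;
		if (attr & 2)	/* the end bit terminates the walk */
			break;
	}
	return 0;
}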
drivers/mmc/host/sdhci.h
...
@@ -226,6 +226,12 @@ struct sdhci_host {
 #define SDHCI_QUIRK_RESTORE_IRQS_AFTER_RESET		(1<<19)
 /* Controller has to be forced to use block size of 2048 bytes */
 #define SDHCI_QUIRK_FORCE_BLK_SZ_2048			(1<<20)
+/* Controller cannot do multi-block transfers */
+#define SDHCI_QUIRK_NO_MULTIBLOCK			(1<<21)
+/* Controller can only handle 1-bit data transfers */
+#define SDHCI_QUIRK_FORCE_1_BIT_DATA			(1<<22)
+/* Controller needs 10ms delay between applying power and clock */
+#define SDHCI_QUIRK_DELAY_AFTER_POWER			(1<<23)

 	int			irq;		/* Device IRQ */
 	void __iomem *		ioaddr;		/* Mapped address */
...
drivers/mmc/host/via-sdmmc.c
new file mode 100644
/*
* drivers/mmc/host/via-sdmmc.c - VIA SD/MMC Card Reader driver
* Copyright (c) 2008, VIA Technologies Inc. All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or (at
* your option) any later version.
*/
#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/highmem.h>
#include <linux/delay.h>
#include <linux/mmc/host.h>
#define DRV_NAME "via_sdmmc"
#define PCI_DEVICE_ID_VIA_9530 0x9530
#define VIA_CRDR_SDC_OFF 0x200
#define VIA_CRDR_DDMA_OFF 0x400
#define VIA_CRDR_PCICTRL_OFF 0x600
#define VIA_CRDR_MIN_CLOCK 375000
#define VIA_CRDR_MAX_CLOCK 48000000
/*
* PCI registers
*/
#define VIA_CRDR_PCI_WORK_MODE 0x40
#define VIA_CRDR_PCI_DBG_MODE 0x41
/*
* SDC MMIO Registers
*/
#define VIA_CRDR_SDCTRL 0x0
#define VIA_CRDR_SDCTRL_START 0x01
#define VIA_CRDR_SDCTRL_WRITE 0x04
#define VIA_CRDR_SDCTRL_SINGLE_WR 0x10
#define VIA_CRDR_SDCTRL_SINGLE_RD 0x20
#define VIA_CRDR_SDCTRL_MULTI_WR 0x30
#define VIA_CRDR_SDCTRL_MULTI_RD 0x40
#define VIA_CRDR_SDCTRL_STOP 0x70
#define VIA_CRDR_SDCTRL_RSP_NONE 0x0
#define VIA_CRDR_SDCTRL_RSP_R1 0x10000
#define VIA_CRDR_SDCTRL_RSP_R2 0x20000
#define VIA_CRDR_SDCTRL_RSP_R3 0x30000
#define VIA_CRDR_SDCTRL_RSP_R1B 0x90000
#define VIA_CRDR_SDCARG 0x4
#define VIA_CRDR_SDBUSMODE 0x8
#define VIA_CRDR_SDMODE_4BIT 0x02
#define VIA_CRDR_SDMODE_CLK_ON 0x40
#define VIA_CRDR_SDBLKLEN 0xc
/*
 * Bit 0 - Bit 10 : Block length. So, the maximum block length should be 2048.
* Bit 11 - Bit 13 : Reserved.
* GPIDET : Select GPI pin to detect card, GPI means CR_CD# in top design.
* INTEN : Enable SD host interrupt.
 * Bit 16 - Bit 31 : Block count. So, the maximum block count should be 65536.
*/
#define VIA_CRDR_SDBLKLEN_GPIDET 0x2000
#define VIA_CRDR_SDBLKLEN_INTEN 0x8000
#define VIA_CRDR_MAX_BLOCK_COUNT 65536
#define VIA_CRDR_MAX_BLOCK_LENGTH 2048
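Going by the comment above, SDBLKLEN packs the block length into bits 0-10 and the block count into bits 16-31; via_sdc_preparedata() later builds the register value this way. A small sketch of that packing (the helper name and the example transfer are invented for illustration):

#include <stdio.h>
#include <stdint.h>

#define VIA_CRDR_SDBLKLEN_GPIDET	0x2000
#define VIA_CRDR_SDBLKLEN_INTEN		0x8000

/* block length (minus one) in bits 0-10, detect/interrupt enables,
 * block count in bits 16-31; blksz must be <= 2048 */
static uint32_t pack_sdblklen(uint16_t blksz, uint16_t blocks)
{
	uint32_t reg = (uint32_t)(blksz - 1);

	reg |= VIA_CRDR_SDBLKLEN_GPIDET | VIA_CRDR_SDBLKLEN_INTEN;
	reg |= (uint32_t)blocks << 16;
	return reg;
}

int main(void)
{
	/* e.g. eight 512-byte blocks */
	printf("SDBLKLEN = 0x%08x\n", (unsigned)pack_sdblklen(512, 8));
	return 0;
}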
#define VIA_CRDR_SDRESP0 0x10
#define VIA_CRDR_SDRESP1 0x14
#define VIA_CRDR_SDRESP2 0x18
#define VIA_CRDR_SDRESP3 0x1c
#define VIA_CRDR_SDCURBLKCNT 0x20
#define VIA_CRDR_SDINTMASK 0x24
/*
* MBDIE : Multiple Blocks transfer Done Interrupt Enable
* BDDIE : Block Data transfer Done Interrupt Enable
* CIRIE : Card Insertion or Removal Interrupt Enable
* CRDIE : Command-Response transfer Done Interrupt Enable
* CRTOIE : Command-Response response TimeOut Interrupt Enable
* ASCRDIE : Auto Stop Command-Response transfer Done Interrupt Enable
* DTIE : Data access Timeout Interrupt Enable
* SCIE : reSponse CRC error Interrupt Enable
* RCIE : Read data CRC error Interrupt Enable
* WCIE : Write data CRC error Interrupt Enable
*/
#define VIA_CRDR_SDINTMASK_MBDIE 0x10
#define VIA_CRDR_SDINTMASK_BDDIE 0x20
#define VIA_CRDR_SDINTMASK_CIRIE 0x80
#define VIA_CRDR_SDINTMASK_CRDIE 0x200
#define VIA_CRDR_SDINTMASK_CRTOIE 0x400
#define VIA_CRDR_SDINTMASK_ASCRDIE 0x800
#define VIA_CRDR_SDINTMASK_DTIE 0x1000
#define VIA_CRDR_SDINTMASK_SCIE 0x2000
#define VIA_CRDR_SDINTMASK_RCIE 0x4000
#define VIA_CRDR_SDINTMASK_WCIE 0x8000
#define VIA_CRDR_SDACTIVE_INTMASK \
(VIA_CRDR_SDINTMASK_MBDIE | VIA_CRDR_SDINTMASK_CIRIE \
| VIA_CRDR_SDINTMASK_CRDIE | VIA_CRDR_SDINTMASK_CRTOIE \
| VIA_CRDR_SDINTMASK_DTIE | VIA_CRDR_SDINTMASK_SCIE \
| VIA_CRDR_SDINTMASK_RCIE | VIA_CRDR_SDINTMASK_WCIE)
#define VIA_CRDR_SDSTATUS 0x28
/*
* CECC : Reserved
* WP : SD card Write Protect status
* SLOTD : Reserved
* SLOTG : SD SLOT status(Gpi pin status)
* MBD : Multiple Blocks transfer Done interrupt status
* BDD : Block Data transfer Done interrupt status
* CD : Reserved
* CIR : Card Insertion or Removal interrupt detected on GPI pin
* IO : Reserved
* CRD : Command-Response transfer Done interrupt status
* CRTO : Command-Response response TimeOut interrupt status
* ASCRDIE : Auto Stop Command-Response transfer Done interrupt status
* DT : Data access Timeout interrupt status
* SC : reSponse CRC error interrupt status
* RC : Read data CRC error interrupt status
* WC : Write data CRC error interrupt status
*/
#define VIA_CRDR_SDSTS_CECC 0x01
#define VIA_CRDR_SDSTS_WP 0x02
#define VIA_CRDR_SDSTS_SLOTD 0x04
#define VIA_CRDR_SDSTS_SLOTG 0x08
#define VIA_CRDR_SDSTS_MBD 0x10
#define VIA_CRDR_SDSTS_BDD 0x20
#define VIA_CRDR_SDSTS_CD 0x40
#define VIA_CRDR_SDSTS_CIR 0x80
#define VIA_CRDR_SDSTS_IO 0x100
#define VIA_CRDR_SDSTS_CRD 0x200
#define VIA_CRDR_SDSTS_CRTO 0x400
#define VIA_CRDR_SDSTS_ASCRDIE 0x800
#define VIA_CRDR_SDSTS_DT 0x1000
#define VIA_CRDR_SDSTS_SC 0x2000
#define VIA_CRDR_SDSTS_RC 0x4000
#define VIA_CRDR_SDSTS_WC 0x8000
#define VIA_CRDR_SDSTS_IGN_MASK\
(VIA_CRDR_SDSTS_BDD | VIA_CRDR_SDSTS_ASCRDIE | VIA_CRDR_SDSTS_IO)
#define VIA_CRDR_SDSTS_INT_MASK \
(VIA_CRDR_SDSTS_MBD | VIA_CRDR_SDSTS_BDD | VIA_CRDR_SDSTS_CD \
| VIA_CRDR_SDSTS_CIR | VIA_CRDR_SDSTS_IO | VIA_CRDR_SDSTS_CRD \
| VIA_CRDR_SDSTS_CRTO | VIA_CRDR_SDSTS_ASCRDIE | VIA_CRDR_SDSTS_DT \
| VIA_CRDR_SDSTS_SC | VIA_CRDR_SDSTS_RC | VIA_CRDR_SDSTS_WC)
#define VIA_CRDR_SDSTS_W1C_MASK \
(VIA_CRDR_SDSTS_CECC | VIA_CRDR_SDSTS_MBD | VIA_CRDR_SDSTS_BDD \
| VIA_CRDR_SDSTS_CD | VIA_CRDR_SDSTS_CIR | VIA_CRDR_SDSTS_CRD \
| VIA_CRDR_SDSTS_CRTO | VIA_CRDR_SDSTS_ASCRDIE | VIA_CRDR_SDSTS_DT \
| VIA_CRDR_SDSTS_SC | VIA_CRDR_SDSTS_RC | VIA_CRDR_SDSTS_WC)
#define VIA_CRDR_SDSTS_CMD_MASK \
(VIA_CRDR_SDSTS_CRD | VIA_CRDR_SDSTS_CRTO | VIA_CRDR_SDSTS_SC)
#define VIA_CRDR_SDSTS_DATA_MASK\
(VIA_CRDR_SDSTS_MBD | VIA_CRDR_SDSTS_DT \
| VIA_CRDR_SDSTS_RC | VIA_CRDR_SDSTS_WC)
#define VIA_CRDR_SDSTATUS2 0x2a
/*
 * CFE : Enable SD host automatic Clock Freezing
*/
#define VIA_CRDR_SDSTS_CFE 0x80
#define VIA_CRDR_SDRSPTMO 0x2C
#define VIA_CRDR_SDCLKSEL 0x30
#define VIA_CRDR_SDEXTCTRL 0x34
#define VIS_CRDR_SDEXTCTRL_AUTOSTOP_SD 0x01
#define VIS_CRDR_SDEXTCTRL_SHIFT_9 0x02
#define VIS_CRDR_SDEXTCTRL_MMC_8BIT 0x04
#define VIS_CRDR_SDEXTCTRL_RELD_BLK 0x08
#define VIS_CRDR_SDEXTCTRL_BAD_CMDA 0x10
#define VIS_CRDR_SDEXTCTRL_BAD_DATA 0x20
#define VIS_CRDR_SDEXTCTRL_AUTOSTOP_SPI 0x40
#define VIA_CRDR_SDEXTCTRL_HISPD 0x80
/* 0x38-0xFF reserved */
/*
* Data DMA Control Registers
*/
#define VIA_CRDR_DMABASEADD 0x0
#define VIA_CRDR_DMACOUNTER 0x4
#define VIA_CRDR_DMACTRL 0x8
/*
* DIR :Transaction Direction
* 0 : From card to memory
* 1 : From memory to card
*/
#define VIA_CRDR_DMACTRL_DIR 0x100
#define VIA_CRDR_DMACTRL_ENIRQ 0x10000
#define VIA_CRDR_DMACTRL_SFTRST 0x1000000
#define VIA_CRDR_DMASTS 0xc
#define VIA_CRDR_DMASTART 0x10
/*0x14-0xFF reserved*/
/*
* PCI Control Registers
*/
/*0x0 - 0x1 reserved*/
#define VIA_CRDR_PCICLKGATT 0x2
/*
* SFTRST :
 * 0 : Soft reset the whole controller; the reset will be de-asserted automatically
* 1 : Soft reset is de-asserted
*/
#define VIA_CRDR_PCICLKGATT_SFTRST 0x01
/*
* 3V3 : Pad power select
* 0 : 1.8V
* 1 : 3.3V
 * NOTE : No matter what the actual value should be, this bit always
 * reads as 0. This is a hardware bug.
*/
#define VIA_CRDR_PCICLKGATT_3V3 0x10
/*
* PAD_PWRON : Pad Power on/off select
* 0 : Power off
* 1 : Power on
 * NOTE : No matter what the actual value should be, this bit always
 * reads as 0. This is a hardware bug.
*/
#define VIA_CRDR_PCICLKGATT_PAD_PWRON 0x20
#define VIA_CRDR_PCISDCCLK 0x5
#define VIA_CRDR_PCIDMACLK 0x7
#define VIA_CRDR_PCIDMACLK_SDC 0x2
#define VIA_CRDR_PCIINTCTRL 0x8
#define VIA_CRDR_PCIINTCTRL_SDCIRQEN 0x04
#define VIA_CRDR_PCIINTSTATUS 0x9
#define VIA_CRDR_PCIINTSTATUS_SDC 0x04
#define VIA_CRDR_PCITMOCTRL 0xa
#define VIA_CRDR_PCITMOCTRL_NO 0x0
#define VIA_CRDR_PCITMOCTRL_32US 0x1
#define VIA_CRDR_PCITMOCTRL_256US 0x2
#define VIA_CRDR_PCITMOCTRL_1024US 0x3
#define VIA_CRDR_PCITMOCTRL_256MS 0x4
#define VIA_CRDR_PCITMOCTRL_512MS 0x5
#define VIA_CRDR_PCITMOCTRL_1024MS 0x6
/*0xB-0xFF reserved*/
enum
PCI_HOST_CLK_CONTROL
{
PCI_CLK_375K
=
0x03
,
PCI_CLK_8M
=
0x04
,
PCI_CLK_12M
=
0x00
,
PCI_CLK_16M
=
0x05
,
PCI_CLK_24M
=
0x01
,
PCI_CLK_33M
=
0x06
,
PCI_CLK_48M
=
0x02
};
struct
sdhcreg
{
u32
sdcontrol_reg
;
u32
sdcmdarg_reg
;
u32
sdbusmode_reg
;
u32
sdblklen_reg
;
u32
sdresp_reg
[
4
];
u32
sdcurblkcnt_reg
;
u32
sdintmask_reg
;
u32
sdstatus_reg
;
u32
sdrsptmo_reg
;
u32
sdclksel_reg
;
u32
sdextctrl_reg
;
};
struct
pcictrlreg
{
u8
reserve
[
2
];
u8
pciclkgat_reg
;
u8
pcinfcclk_reg
;
u8
pcimscclk_reg
;
u8
pcisdclk_reg
;
u8
pcicaclk_reg
;
u8
pcidmaclk_reg
;
u8
pciintctrl_reg
;
u8
pciintstatus_reg
;
u8
pcitmoctrl_reg
;
u8
Resv
;
};
struct
via_crdr_mmc_host
{
struct
mmc_host
*
mmc
;
struct
mmc_request
*
mrq
;
struct
mmc_command
*
cmd
;
struct
mmc_data
*
data
;
void
__iomem
*
mmiobase
;
void
__iomem
*
sdhc_mmiobase
;
void
__iomem
*
ddma_mmiobase
;
void
__iomem
*
pcictrl_mmiobase
;
struct
pcictrlreg
pm_pcictrl_reg
;
struct
sdhcreg
pm_sdhc_reg
;
struct
work_struct
carddet_work
;
struct
tasklet_struct
finish_tasklet
;
struct
timer_list
timer
;
spinlock_t
lock
;
u8
power
;
int
reject
;
unsigned
int
quirks
;
};
/* some devices need a very long delay for power to stabilize */
#define VIA_CRDR_QUIRK_300MS_PWRDELAY 0x0001
static
struct
pci_device_id
via_ids
[]
=
{
{
PCI_VENDOR_ID_VIA
,
PCI_DEVICE_ID_VIA_9530
,
PCI_ANY_ID
,
PCI_ANY_ID
,
0
,
0
,
0
,},
{
0
,}
};
MODULE_DEVICE_TABLE
(
pci
,
via_ids
);
static
void
via_print_sdchc
(
struct
via_crdr_mmc_host
*
host
)
{
void
__iomem
*
addrbase
=
host
->
sdhc_mmiobase
;
pr_debug
(
"SDC MMIO Registers:
\n
"
);
pr_debug
(
"SDCONTROL=%08x, SDCMDARG=%08x, SDBUSMODE=%08x
\n
"
,
readl
(
addrbase
+
VIA_CRDR_SDCTRL
),
readl
(
addrbase
+
VIA_CRDR_SDCARG
),
readl
(
addrbase
+
VIA_CRDR_SDBUSMODE
));
pr_debug
(
"SDBLKLEN=%08x, SDCURBLKCNT=%08x, SDINTMASK=%08x
\n
"
,
readl
(
addrbase
+
VIA_CRDR_SDBLKLEN
),
readl
(
addrbase
+
VIA_CRDR_SDCURBLKCNT
),
readl
(
addrbase
+
VIA_CRDR_SDINTMASK
));
pr_debug
(
"SDSTATUS=%08x, SDCLKSEL=%08x, SDEXTCTRL=%08x
\n
"
,
readl
(
addrbase
+
VIA_CRDR_SDSTATUS
),
readl
(
addrbase
+
VIA_CRDR_SDCLKSEL
),
readl
(
addrbase
+
VIA_CRDR_SDEXTCTRL
));
}
static
void
via_print_pcictrl
(
struct
via_crdr_mmc_host
*
host
)
{
void
__iomem
*
addrbase
=
host
->
pcictrl_mmiobase
;
pr_debug
(
"PCI Control Registers:
\n
"
);
pr_debug
(
"PCICLKGATT=%02x, PCISDCCLK=%02x, PCIDMACLK=%02x
\n
"
,
readb
(
addrbase
+
VIA_CRDR_PCICLKGATT
),
readb
(
addrbase
+
VIA_CRDR_PCISDCCLK
),
readb
(
addrbase
+
VIA_CRDR_PCIDMACLK
));
pr_debug
(
"PCIINTCTRL=%02x, PCIINTSTATUS=%02x
\n
"
,
readb
(
addrbase
+
VIA_CRDR_PCIINTCTRL
),
readb
(
addrbase
+
VIA_CRDR_PCIINTSTATUS
));
}
static
void
via_save_pcictrlreg
(
struct
via_crdr_mmc_host
*
host
)
{
struct
pcictrlreg
*
pm_pcictrl_reg
;
void
__iomem
*
addrbase
;
pm_pcictrl_reg
=
&
(
host
->
pm_pcictrl_reg
);
addrbase
=
host
->
pcictrl_mmiobase
;
pm_pcictrl_reg
->
pciclkgat_reg
=
readb
(
addrbase
+
VIA_CRDR_PCICLKGATT
);
pm_pcictrl_reg
->
pciclkgat_reg
|=
VIA_CRDR_PCICLKGATT_3V3
|
VIA_CRDR_PCICLKGATT_PAD_PWRON
;
pm_pcictrl_reg
->
pcisdclk_reg
=
readb
(
addrbase
+
VIA_CRDR_PCISDCCLK
);
pm_pcictrl_reg
->
pcidmaclk_reg
=
readb
(
addrbase
+
VIA_CRDR_PCIDMACLK
);
pm_pcictrl_reg
->
pciintctrl_reg
=
readb
(
addrbase
+
VIA_CRDR_PCIINTCTRL
);
pm_pcictrl_reg
->
pciintstatus_reg
=
readb
(
addrbase
+
VIA_CRDR_PCIINTSTATUS
);
pm_pcictrl_reg
->
pcitmoctrl_reg
=
readb
(
addrbase
+
VIA_CRDR_PCITMOCTRL
);
}
static
void
via_restore_pcictrlreg
(
struct
via_crdr_mmc_host
*
host
)
{
struct
pcictrlreg
*
pm_pcictrl_reg
;
void
__iomem
*
addrbase
;
pm_pcictrl_reg
=
&
(
host
->
pm_pcictrl_reg
);
addrbase
=
host
->
pcictrl_mmiobase
;
writeb
(
pm_pcictrl_reg
->
pciclkgat_reg
,
addrbase
+
VIA_CRDR_PCICLKGATT
);
writeb
(
pm_pcictrl_reg
->
pcisdclk_reg
,
addrbase
+
VIA_CRDR_PCISDCCLK
);
writeb
(
pm_pcictrl_reg
->
pcidmaclk_reg
,
addrbase
+
VIA_CRDR_PCIDMACLK
);
writeb
(
pm_pcictrl_reg
->
pciintctrl_reg
,
addrbase
+
VIA_CRDR_PCIINTCTRL
);
writeb
(
pm_pcictrl_reg
->
pciintstatus_reg
,
addrbase
+
VIA_CRDR_PCIINTSTATUS
);
writeb
(
pm_pcictrl_reg
->
pcitmoctrl_reg
,
addrbase
+
VIA_CRDR_PCITMOCTRL
);
}
static
void
via_save_sdcreg
(
struct
via_crdr_mmc_host
*
host
)
{
struct
sdhcreg
*
pm_sdhc_reg
;
void
__iomem
*
addrbase
;
pm_sdhc_reg
=
&
(
host
->
pm_sdhc_reg
);
addrbase
=
host
->
sdhc_mmiobase
;
pm_sdhc_reg
->
sdcontrol_reg
=
readl
(
addrbase
+
VIA_CRDR_SDCTRL
);
pm_sdhc_reg
->
sdcmdarg_reg
=
readl
(
addrbase
+
VIA_CRDR_SDCARG
);
pm_sdhc_reg
->
sdbusmode_reg
=
readl
(
addrbase
+
VIA_CRDR_SDBUSMODE
);
pm_sdhc_reg
->
sdblklen_reg
=
readl
(
addrbase
+
VIA_CRDR_SDBLKLEN
);
pm_sdhc_reg
->
sdcurblkcnt_reg
=
readl
(
addrbase
+
VIA_CRDR_SDCURBLKCNT
);
pm_sdhc_reg
->
sdintmask_reg
=
readl
(
addrbase
+
VIA_CRDR_SDINTMASK
);
pm_sdhc_reg
->
sdstatus_reg
=
readl
(
addrbase
+
VIA_CRDR_SDSTATUS
);
pm_sdhc_reg
->
sdrsptmo_reg
=
readl
(
addrbase
+
VIA_CRDR_SDRSPTMO
);
pm_sdhc_reg
->
sdclksel_reg
=
readl
(
addrbase
+
VIA_CRDR_SDCLKSEL
);
pm_sdhc_reg
->
sdextctrl_reg
=
readl
(
addrbase
+
VIA_CRDR_SDEXTCTRL
);
}
static
void
via_restore_sdcreg
(
struct
via_crdr_mmc_host
*
host
)
{
struct
sdhcreg
*
pm_sdhc_reg
;
void
__iomem
*
addrbase
;
pm_sdhc_reg
=
&
(
host
->
pm_sdhc_reg
);
addrbase
=
host
->
sdhc_mmiobase
;
writel
(
pm_sdhc_reg
->
sdcontrol_reg
,
addrbase
+
VIA_CRDR_SDCTRL
);
writel
(
pm_sdhc_reg
->
sdcmdarg_reg
,
addrbase
+
VIA_CRDR_SDCARG
);
writel
(
pm_sdhc_reg
->
sdbusmode_reg
,
addrbase
+
VIA_CRDR_SDBUSMODE
);
writel
(
pm_sdhc_reg
->
sdblklen_reg
,
addrbase
+
VIA_CRDR_SDBLKLEN
);
writel
(
pm_sdhc_reg
->
sdcurblkcnt_reg
,
addrbase
+
VIA_CRDR_SDCURBLKCNT
);
writel
(
pm_sdhc_reg
->
sdintmask_reg
,
addrbase
+
VIA_CRDR_SDINTMASK
);
writel
(
pm_sdhc_reg
->
sdstatus_reg
,
addrbase
+
VIA_CRDR_SDSTATUS
);
writel
(
pm_sdhc_reg
->
sdrsptmo_reg
,
addrbase
+
VIA_CRDR_SDRSPTMO
);
writel
(
pm_sdhc_reg
->
sdclksel_reg
,
addrbase
+
VIA_CRDR_SDCLKSEL
);
writel
(
pm_sdhc_reg
->
sdextctrl_reg
,
addrbase
+
VIA_CRDR_SDEXTCTRL
);
}
static
void
via_pwron_sleep
(
struct
via_crdr_mmc_host
*
sdhost
)
{
if
(
sdhost
->
quirks
&
VIA_CRDR_QUIRK_300MS_PWRDELAY
)
msleep
(
300
);
else
msleep
(
3
);
}
static
void
via_set_ddma
(
struct
via_crdr_mmc_host
*
host
,
dma_addr_t
dmaaddr
,
u32
count
,
int
dir
,
int
enirq
)
{
void
__iomem
*
addrbase
;
u32
ctrl_data
=
0
;
if
(
enirq
)
ctrl_data
|=
VIA_CRDR_DMACTRL_ENIRQ
;
if
(
dir
)
ctrl_data
|=
VIA_CRDR_DMACTRL_DIR
;
addrbase
=
host
->
ddma_mmiobase
;
writel
(
dmaaddr
,
addrbase
+
VIA_CRDR_DMABASEADD
);
writel
(
count
,
addrbase
+
VIA_CRDR_DMACOUNTER
);
writel
(
ctrl_data
,
addrbase
+
VIA_CRDR_DMACTRL
);
writel
(
0x01
,
addrbase
+
VIA_CRDR_DMASTART
);
/* It seems that our DMA can not work normally with 375kHz clock */
/* FIXME: don't brute-force 8MHz but use PIO at 375kHz !! */
addrbase
=
host
->
pcictrl_mmiobase
;
if
(
readb
(
addrbase
+
VIA_CRDR_PCISDCCLK
)
==
PCI_CLK_375K
)
{
dev_info
(
host
->
mmc
->
parent
,
"forcing card speed to 8MHz
\n
"
);
writeb
(
PCI_CLK_8M
,
addrbase
+
VIA_CRDR_PCISDCCLK
);
}
}
static
void
via_sdc_preparedata
(
struct
via_crdr_mmc_host
*
host
,
struct
mmc_data
*
data
)
{
void
__iomem
*
addrbase
;
u32
blk_reg
;
int
count
;
WARN_ON
(
host
->
data
);
/* Sanity checks */
BUG_ON
(
data
->
blksz
>
host
->
mmc
->
max_blk_size
);
BUG_ON
(
data
->
blocks
>
host
->
mmc
->
max_blk_count
);
host
->
data
=
data
;
count
=
dma_map_sg
(
mmc_dev
(
host
->
mmc
),
data
->
sg
,
data
->
sg_len
,
((
data
->
flags
&
MMC_DATA_READ
)
?
PCI_DMA_FROMDEVICE
:
PCI_DMA_TODEVICE
));
BUG_ON
(
count
!=
1
);
via_set_ddma
(
host
,
sg_dma_address
(
data
->
sg
),
sg_dma_len
(
data
->
sg
),
(
data
->
flags
&
MMC_DATA_WRITE
)
?
1
:
0
,
1
);
addrbase
=
host
->
sdhc_mmiobase
;
blk_reg
=
data
->
blksz
-
1
;
blk_reg
|=
VIA_CRDR_SDBLKLEN_GPIDET
|
VIA_CRDR_SDBLKLEN_INTEN
;
blk_reg
|=
(
data
->
blocks
)
<<
16
;
writel
(
blk_reg
,
addrbase
+
VIA_CRDR_SDBLKLEN
);
}
static
void
via_sdc_get_response
(
struct
via_crdr_mmc_host
*
host
,
struct
mmc_command
*
cmd
)
{
void
__iomem
*
addrbase
=
host
->
sdhc_mmiobase
;
u32
dwdata0
=
readl
(
addrbase
+
VIA_CRDR_SDRESP0
);
u32
dwdata1
=
readl
(
addrbase
+
VIA_CRDR_SDRESP1
);
u32
dwdata2
=
readl
(
addrbase
+
VIA_CRDR_SDRESP2
);
u32
dwdata3
=
readl
(
addrbase
+
VIA_CRDR_SDRESP3
);
if
(
cmd
->
flags
&
MMC_RSP_136
)
{
cmd
->
resp
[
0
]
=
((
u8
)
(
dwdata1
))
|
(((
u8
)
(
dwdata0
>>
24
))
<<
8
)
|
(((
u8
)
(
dwdata0
>>
16
))
<<
16
)
|
(((
u8
)
(
dwdata0
>>
8
))
<<
24
);
cmd
->
resp
[
1
]
=
((
u8
)
(
dwdata2
))
|
(((
u8
)
(
dwdata1
>>
24
))
<<
8
)
|
(((
u8
)
(
dwdata1
>>
16
))
<<
16
)
|
(((
u8
)
(
dwdata1
>>
8
))
<<
24
);
cmd
->
resp
[
2
]
=
((
u8
)
(
dwdata3
))
|
(((
u8
)
(
dwdata2
>>
24
))
<<
8
)
|
(((
u8
)
(
dwdata2
>>
16
))
<<
16
)
|
(((
u8
)
(
dwdata2
>>
8
))
<<
24
);
cmd
->
resp
[
3
]
=
0xff
|
((((
u8
)
(
dwdata3
>>
24
)))
<<
8
)
|
(((
u8
)
(
dwdata3
>>
16
))
<<
16
)
|
(((
u8
)
(
dwdata3
>>
8
))
<<
24
);
}
else
{
dwdata0
>>=
8
;
cmd
->
resp
[
0
]
=
((
dwdata0
&
0xff
)
<<
24
)
|
(((
dwdata0
>>
8
)
&
0xff
)
<<
16
)
|
(((
dwdata0
>>
16
)
&
0xff
)
<<
8
)
|
(
dwdata1
&
0xff
);
dwdata1
>>=
8
;
cmd
->
resp
[
1
]
=
((
dwdata1
&
0xff
)
<<
24
)
|
(((
dwdata1
>>
8
)
&
0xff
)
<<
16
)
|
(((
dwdata1
>>
16
)
&
0xff
)
<<
8
);
}
}
static
void
via_sdc_send_command
(
struct
via_crdr_mmc_host
*
host
,
struct
mmc_command
*
cmd
)
{
void
__iomem
*
addrbase
;
struct
mmc_data
*
data
;
u32
cmdctrl
=
0
;
WARN_ON
(
host
->
cmd
);
data
=
cmd
->
data
;
mod_timer
(
&
host
->
timer
,
jiffies
+
HZ
);
host
->
cmd
=
cmd
;
/*Command index*/
cmdctrl
=
cmd
->
opcode
<<
8
;
/*Response type*/
switch
(
mmc_resp_type
(
cmd
))
{
case
MMC_RSP_NONE
:
cmdctrl
|=
VIA_CRDR_SDCTRL_RSP_NONE
;
break
;
case
MMC_RSP_R1
:
cmdctrl
|=
VIA_CRDR_SDCTRL_RSP_R1
;
break
;
case
MMC_RSP_R1B
:
cmdctrl
|=
VIA_CRDR_SDCTRL_RSP_R1B
;
break
;
case
MMC_RSP_R2
:
cmdctrl
|=
VIA_CRDR_SDCTRL_RSP_R2
;
break
;
case
MMC_RSP_R3
:
cmdctrl
|=
VIA_CRDR_SDCTRL_RSP_R3
;
break
;
default:
pr_err
(
"%s: cmd->flag is not valid
\n
"
,
mmc_hostname
(
host
->
mmc
));
break
;
}
if
(
!
(
cmd
->
data
))
goto
nodata
;
via_sdc_preparedata
(
host
,
data
);
/*Command control*/
if
(
data
->
blocks
>
1
)
{
if
(
data
->
flags
&
MMC_DATA_WRITE
)
{
cmdctrl
|=
VIA_CRDR_SDCTRL_WRITE
;
cmdctrl
|=
VIA_CRDR_SDCTRL_MULTI_WR
;
}
else
{
cmdctrl
|=
VIA_CRDR_SDCTRL_MULTI_RD
;
}
}
else
{
if
(
data
->
flags
&
MMC_DATA_WRITE
)
{
cmdctrl
|=
VIA_CRDR_SDCTRL_WRITE
;
cmdctrl
|=
VIA_CRDR_SDCTRL_SINGLE_WR
;
}
else
{
cmdctrl
|=
VIA_CRDR_SDCTRL_SINGLE_RD
;
}
}
nodata:
if
(
cmd
==
host
->
mrq
->
stop
)
cmdctrl
|=
VIA_CRDR_SDCTRL_STOP
;
cmdctrl
|=
VIA_CRDR_SDCTRL_START
;
addrbase
=
host
->
sdhc_mmiobase
;
writel
(
cmd
->
arg
,
addrbase
+
VIA_CRDR_SDCARG
);
writel
(
cmdctrl
,
addrbase
+
VIA_CRDR_SDCTRL
);
}
static
void
via_sdc_finish_data
(
struct
via_crdr_mmc_host
*
host
)
{
struct
mmc_data
*
data
;
BUG_ON
(
!
host
->
data
);
data
=
host
->
data
;
host
->
data
=
NULL
;
if
(
data
->
error
)
data
->
bytes_xfered
=
0
;
else
data
->
bytes_xfered
=
data
->
blocks
*
data
->
blksz
;
dma_unmap_sg
(
mmc_dev
(
host
->
mmc
),
data
->
sg
,
data
->
sg_len
,
((
data
->
flags
&
MMC_DATA_READ
)
?
PCI_DMA_FROMDEVICE
:
PCI_DMA_TODEVICE
));
if
(
data
->
stop
)
via_sdc_send_command
(
host
,
data
->
stop
);
else
tasklet_schedule
(
&
host
->
finish_tasklet
);
}
static
void
via_sdc_finish_command
(
struct
via_crdr_mmc_host
*
host
)
{
via_sdc_get_response
(
host
,
host
->
cmd
);
host
->
cmd
->
error
=
0
;
if
(
!
host
->
cmd
->
data
)
tasklet_schedule
(
&
host
->
finish_tasklet
);
host
->
cmd
=
NULL
;
}
static
void
via_sdc_request
(
struct
mmc_host
*
mmc
,
struct
mmc_request
*
mrq
)
{
void
__iomem
*
addrbase
;
struct
via_crdr_mmc_host
*
host
;
unsigned
long
flags
;
u16
status
;
host
=
mmc_priv
(
mmc
);
spin_lock_irqsave
(
&
host
->
lock
,
flags
);
addrbase
=
host
->
pcictrl_mmiobase
;
writeb
(
VIA_CRDR_PCIDMACLK_SDC
,
addrbase
+
VIA_CRDR_PCIDMACLK
);
status
=
readw
(
host
->
sdhc_mmiobase
+
VIA_CRDR_SDSTATUS
);
status
&=
VIA_CRDR_SDSTS_W1C_MASK
;
writew
(
status
,
host
->
sdhc_mmiobase
+
VIA_CRDR_SDSTATUS
);
WARN_ON
(
host
->
mrq
!=
NULL
);
host
->
mrq
=
mrq
;
status
=
readw
(
host
->
sdhc_mmiobase
+
VIA_CRDR_SDSTATUS
);
if
(
!
(
status
&
VIA_CRDR_SDSTS_SLOTG
)
||
host
->
reject
)
{
host
->
mrq
->
cmd
->
error
=
-
ENOMEDIUM
;
tasklet_schedule
(
&
host
->
finish_tasklet
);
}
else
{
via_sdc_send_command
(
host
,
mrq
->
cmd
);
}
mmiowb
();
spin_unlock_irqrestore
(
&
host
->
lock
,
flags
);
}
static
void
via_sdc_set_power
(
struct
via_crdr_mmc_host
*
host
,
unsigned
short
power
,
unsigned
int
on
)
{
unsigned
long
flags
;
u8
gatt
;
spin_lock_irqsave
(
&
host
->
lock
,
flags
);
host
->
power
=
(
1
<<
power
);
gatt
=
readb
(
host
->
pcictrl_mmiobase
+
VIA_CRDR_PCICLKGATT
);
if
(
host
->
power
==
MMC_VDD_165_195
)
gatt
&=
~
VIA_CRDR_PCICLKGATT_3V3
;
else
gatt
|=
VIA_CRDR_PCICLKGATT_3V3
;
if
(
on
)
gatt
|=
VIA_CRDR_PCICLKGATT_PAD_PWRON
;
else
gatt
&=
~
VIA_CRDR_PCICLKGATT_PAD_PWRON
;
writeb
(
gatt
,
host
->
pcictrl_mmiobase
+
VIA_CRDR_PCICLKGATT
);
mmiowb
();
spin_unlock_irqrestore
(
&
host
->
lock
,
flags
);
via_pwron_sleep
(
host
);
}
static
void
via_sdc_set_ios
(
struct
mmc_host
*
mmc
,
struct
mmc_ios
*
ios
)
{
struct
via_crdr_mmc_host
*
host
;
unsigned
long
flags
;
void
__iomem
*
addrbase
;
u32
org_data
,
sdextctrl
;
u8
clock
;
host
=
mmc_priv
(
mmc
);
spin_lock_irqsave
(
&
host
->
lock
,
flags
);
addrbase
=
host
->
sdhc_mmiobase
;
org_data
=
readl
(
addrbase
+
VIA_CRDR_SDBUSMODE
);
sdextctrl
=
readl
(
addrbase
+
VIA_CRDR_SDEXTCTRL
);
if
(
ios
->
bus_width
==
MMC_BUS_WIDTH_1
)
org_data
&=
~
VIA_CRDR_SDMODE_4BIT
;
else
org_data
|=
VIA_CRDR_SDMODE_4BIT
;
if
(
ios
->
power_mode
==
MMC_POWER_OFF
)
org_data
&=
~
VIA_CRDR_SDMODE_CLK_ON
;
else
org_data
|=
VIA_CRDR_SDMODE_CLK_ON
;
if
(
ios
->
timing
==
MMC_TIMING_SD_HS
)
sdextctrl
|=
VIA_CRDR_SDEXTCTRL_HISPD
;
else
sdextctrl
&=
~
VIA_CRDR_SDEXTCTRL_HISPD
;
writel
(
org_data
,
addrbase
+
VIA_CRDR_SDBUSMODE
);
writel
(
sdextctrl
,
addrbase
+
VIA_CRDR_SDEXTCTRL
);
if
(
ios
->
clock
>=
48000000
)
clock
=
PCI_CLK_48M
;
else
if
(
ios
->
clock
>=
33000000
)
clock
=
PCI_CLK_33M
;
else
if
(
ios
->
clock
>=
24000000
)
clock
=
PCI_CLK_24M
;
else
if
(
ios
->
clock
>=
16000000
)
clock
=
PCI_CLK_16M
;
else
if
(
ios
->
clock
>=
12000000
)
clock
=
PCI_CLK_12M
;
else
if
(
ios
->
clock
>=
8000000
)
clock
=
PCI_CLK_8M
;
else
clock
=
PCI_CLK_375K
;
addrbase
=
host
->
pcictrl_mmiobase
;
if
(
readb
(
addrbase
+
VIA_CRDR_PCISDCCLK
)
!=
clock
)
writeb
(
clock
,
addrbase
+
VIA_CRDR_PCISDCCLK
);
mmiowb
();
spin_unlock_irqrestore
(
&
host
->
lock
,
flags
);
if
(
ios
->
power_mode
!=
MMC_POWER_OFF
)
via_sdc_set_power
(
host
,
ios
->
vdd
,
1
);
else
via_sdc_set_power
(
host
,
ios
->
vdd
,
0
);
}
static
int
via_sdc_get_ro
(
struct
mmc_host
*
mmc
)
{
struct
via_crdr_mmc_host
*
host
;
unsigned
long
flags
;
u16
status
;
host
=
mmc_priv
(
mmc
);
spin_lock_irqsave
(
&
host
->
lock
,
flags
);
status
=
readw
(
host
->
sdhc_mmiobase
+
VIA_CRDR_SDSTATUS
);
spin_unlock_irqrestore
(
&
host
->
lock
,
flags
);
return
!
(
status
&
VIA_CRDR_SDSTS_WP
);
}
static
const
struct
mmc_host_ops
via_sdc_ops
=
{
.
request
=
via_sdc_request
,
.
set_ios
=
via_sdc_set_ios
,
.
get_ro
=
via_sdc_get_ro
,
};
static
void
via_reset_pcictrl
(
struct
via_crdr_mmc_host
*
host
)
{
void
__iomem
*
addrbase
;
unsigned
long
flags
;
u8
gatt
;
addrbase
=
host
->
pcictrl_mmiobase
;
spin_lock_irqsave
(
&
host
->
lock
,
flags
);
via_save_pcictrlreg
(
host
);
via_save_sdcreg
(
host
);
spin_unlock_irqrestore
(
&
host
->
lock
,
flags
);
gatt
=
VIA_CRDR_PCICLKGATT_PAD_PWRON
;
if
(
host
->
power
==
MMC_VDD_165_195
)
gatt
&=
VIA_CRDR_PCICLKGATT_3V3
;
else
gatt
|=
VIA_CRDR_PCICLKGATT_3V3
;
writeb
(
gatt
,
host
->
pcictrl_mmiobase
+
VIA_CRDR_PCICLKGATT
);
via_pwron_sleep
(
host
);
gatt
|=
VIA_CRDR_PCICLKGATT_SFTRST
;
writeb
(
gatt
,
host
->
pcictrl_mmiobase
+
VIA_CRDR_PCICLKGATT
);
msleep
(
3
);
spin_lock_irqsave
(
&
host
->
lock
,
flags
);
via_restore_pcictrlreg
(
host
);
via_restore_sdcreg
(
host
);
mmiowb
();
spin_unlock_irqrestore
(
&
host
->
lock
,
flags
);
}
static
void
via_sdc_cmd_isr
(
struct
via_crdr_mmc_host
*
host
,
u16
intmask
)
{
BUG_ON
(
intmask
==
0
);
if
(
!
host
->
cmd
)
{
pr_err
(
"%s: Got command interrupt 0x%x even "
"though no command operation was in progress.
\n
"
,
mmc_hostname
(
host
->
mmc
),
intmask
);
return
;
}
if
(
intmask
&
VIA_CRDR_SDSTS_CRTO
)
host
->
cmd
->
error
=
-
ETIMEDOUT
;
else
if
(
intmask
&
VIA_CRDR_SDSTS_SC
)
host
->
cmd
->
error
=
-
EILSEQ
;
if
(
host
->
cmd
->
error
)
tasklet_schedule
(
&
host
->
finish_tasklet
);
else
if
(
intmask
&
VIA_CRDR_SDSTS_CRD
)
via_sdc_finish_command
(
host
);
}
static
void
via_sdc_data_isr
(
struct
via_crdr_mmc_host
*
host
,
u16
intmask
)
{
BUG_ON
(
intmask
==
0
);
if
(
intmask
&
VIA_CRDR_SDSTS_DT
)
host
->
data
->
error
=
-
ETIMEDOUT
;
else
if
(
intmask
&
(
VIA_CRDR_SDSTS_RC
|
VIA_CRDR_SDSTS_WC
))
host
->
data
->
error
=
-
EILSEQ
;
via_sdc_finish_data
(
host
);
}
static
irqreturn_t
via_sdc_isr
(
int
irq
,
void
*
dev_id
)
{
struct
via_crdr_mmc_host
*
sdhost
=
dev_id
;
void
__iomem
*
addrbase
;
u8
pci_status
;
u16
sd_status
;
irqreturn_t
result
;
if
(
!
sdhost
)
return
IRQ_NONE
;
spin_lock
(
&
sdhost
->
lock
);
addrbase
=
sdhost
->
pcictrl_mmiobase
;
pci_status
=
readb
(
addrbase
+
VIA_CRDR_PCIINTSTATUS
);
if
(
!
(
pci_status
&
VIA_CRDR_PCIINTSTATUS_SDC
))
{
result
=
IRQ_NONE
;
goto
out
;
}
addrbase
=
sdhost
->
sdhc_mmiobase
;
sd_status
=
readw
(
addrbase
+
VIA_CRDR_SDSTATUS
);
sd_status
&=
VIA_CRDR_SDSTS_INT_MASK
;
sd_status
&=
~
VIA_CRDR_SDSTS_IGN_MASK
;
if
(
!
sd_status
)
{
result
=
IRQ_NONE
;
goto
out
;
}
if
(
sd_status
&
VIA_CRDR_SDSTS_CIR
)
{
writew
(
sd_status
&
VIA_CRDR_SDSTS_CIR
,
addrbase
+
VIA_CRDR_SDSTATUS
);
schedule_work
(
&
sdhost
->
carddet_work
);
}
sd_status
&=
~
VIA_CRDR_SDSTS_CIR
;
if
(
sd_status
&
VIA_CRDR_SDSTS_CMD_MASK
)
{
writew
(
sd_status
&
VIA_CRDR_SDSTS_CMD_MASK
,
addrbase
+
VIA_CRDR_SDSTATUS
);
via_sdc_cmd_isr
(
sdhost
,
sd_status
&
VIA_CRDR_SDSTS_CMD_MASK
);
}
if
(
sd_status
&
VIA_CRDR_SDSTS_DATA_MASK
)
{
writew
(
sd_status
&
VIA_CRDR_SDSTS_DATA_MASK
,
addrbase
+
VIA_CRDR_SDSTATUS
);
via_sdc_data_isr
(
sdhost
,
sd_status
&
VIA_CRDR_SDSTS_DATA_MASK
);
}
sd_status
&=
~
(
VIA_CRDR_SDSTS_CMD_MASK
|
VIA_CRDR_SDSTS_DATA_MASK
);
if
(
sd_status
)
{
pr_err
(
"%s: Unexpected interrupt 0x%x
\n
"
,
mmc_hostname
(
sdhost
->
mmc
),
sd_status
);
writew
(
sd_status
,
addrbase
+
VIA_CRDR_SDSTATUS
);
}
result
=
IRQ_HANDLED
;
mmiowb
();
out:
spin_unlock
(
&
sdhost
->
lock
);
return
result
;
}
static
void
via_sdc_timeout
(
unsigned
long
ulongdata
)
{
struct
via_crdr_mmc_host
*
sdhost
;
unsigned
long
flags
;
sdhost
=
(
struct
via_crdr_mmc_host
*
)
ulongdata
;
spin_lock_irqsave
(
&
sdhost
->
lock
,
flags
);
if
(
sdhost
->
mrq
)
{
pr_err
(
"%s: Timeout waiting for hardware interrupt."
"cmd:0x%x
\n
"
,
mmc_hostname
(
sdhost
->
mmc
),
sdhost
->
mrq
->
cmd
->
opcode
);
if
(
sdhost
->
data
)
{
writel
(
VIA_CRDR_DMACTRL_SFTRST
,
sdhost
->
ddma_mmiobase
+
VIA_CRDR_DMACTRL
);
sdhost
->
data
->
error
=
-
ETIMEDOUT
;
via_sdc_finish_data
(
sdhost
);
}
else
{
if
(
sdhost
->
cmd
)
sdhost
->
cmd
->
error
=
-
ETIMEDOUT
;
else
sdhost
->
mrq
->
cmd
->
error
=
-
ETIMEDOUT
;
tasklet_schedule
(
&
sdhost
->
finish_tasklet
);
}
}
mmiowb
();
spin_unlock_irqrestore
(
&
sdhost
->
lock
,
flags
);
}
static
void
via_sdc_tasklet_finish
(
unsigned
long
param
)
{
struct
via_crdr_mmc_host
*
host
;
unsigned
long
flags
;
struct
mmc_request
*
mrq
;
host
=
(
struct
via_crdr_mmc_host
*
)
param
;
spin_lock_irqsave
(
&
host
->
lock
,
flags
);
del_timer
(
&
host
->
timer
);
mrq
=
host
->
mrq
;
host
->
mrq
=
NULL
;
host
->
cmd
=
NULL
;
host
->
data
=
NULL
;
spin_unlock_irqrestore
(
&
host
->
lock
,
flags
);
mmc_request_done
(
host
->
mmc
,
mrq
);
}
static
void
via_sdc_card_detect
(
struct
work_struct
*
work
)
{
struct
via_crdr_mmc_host
*
host
;
void
__iomem
*
addrbase
;
unsigned
long
flags
;
u16
status
;
host
=
container_of
(
work
,
struct
via_crdr_mmc_host
,
carddet_work
);
addrbase
=
host
->
ddma_mmiobase
;
writel
(
VIA_CRDR_DMACTRL_SFTRST
,
addrbase
+
VIA_CRDR_DMACTRL
);
spin_lock_irqsave
(
&
host
->
lock
,
flags
);
addrbase
=
host
->
pcictrl_mmiobase
;
writeb
(
VIA_CRDR_PCIDMACLK_SDC
,
addrbase
+
VIA_CRDR_PCIDMACLK
);
addrbase
=
host
->
sdhc_mmiobase
;
status
=
readw
(
addrbase
+
VIA_CRDR_SDSTATUS
);
if
(
!
(
status
&
VIA_CRDR_SDSTS_SLOTG
))
{
if
(
host
->
mrq
)
{
pr_err
(
"%s: Card removed during transfer!
\n
"
,
mmc_hostname
(
host
->
mmc
));
host
->
mrq
->
cmd
->
error
=
-
ENOMEDIUM
;
tasklet_schedule
(
&
host
->
finish_tasklet
);
}
mmiowb
();
spin_unlock_irqrestore
(
&
host
->
lock
,
flags
);
via_reset_pcictrl
(
host
);
spin_lock_irqsave
(
&
host
->
lock
,
flags
);
}
mmiowb
();
spin_unlock_irqrestore
(
&
host
->
lock
,
flags
);
via_print_pcictrl
(
host
);
via_print_sdchc
(
host
);
mmc_detect_change
(
host
->
mmc
,
msecs_to_jiffies
(
500
));
}
static
void
via_init_mmc_host
(
struct
via_crdr_mmc_host
*
host
)
{
struct
mmc_host
*
mmc
=
host
->
mmc
;
void
__iomem
*
addrbase
;
u32
lenreg
;
u32
status
;
init_timer
(
&
host
->
timer
);
host
->
timer
.
data
=
(
unsigned
long
)
host
;
host
->
timer
.
function
=
via_sdc_timeout
;
spin_lock_init
(
&
host
->
lock
);
mmc
->
f_min
=
VIA_CRDR_MIN_CLOCK
;
mmc
->
f_max
=
VIA_CRDR_MAX_CLOCK
;
mmc
->
ocr_avail
=
MMC_VDD_32_33
|
MMC_VDD_33_34
|
MMC_VDD_165_195
;
mmc
->
caps
=
MMC_CAP_4_BIT_DATA
|
MMC_CAP_SD_HIGHSPEED
;
mmc
->
ops
=
&
via_sdc_ops
;
/*Hardware cannot do scatter lists*/
mmc
->
max_hw_segs
=
1
;
mmc
->
max_phys_segs
=
1
;
mmc
->
max_blk_size
=
VIA_CRDR_MAX_BLOCK_LENGTH
;
mmc
->
max_blk_count
=
VIA_CRDR_MAX_BLOCK_COUNT
;
mmc
->
max_seg_size
=
mmc
->
max_blk_size
*
mmc
->
max_blk_count
;
mmc
->
max_req_size
=
mmc
->
max_seg_size
;
INIT_WORK
(
&
host
->
carddet_work
,
via_sdc_card_detect
);
tasklet_init
(
&
host
->
finish_tasklet
,
via_sdc_tasklet_finish
,
(
unsigned
long
)
host
);
addrbase
=
host
->
sdhc_mmiobase
;
writel
(
0x0
,
addrbase
+
VIA_CRDR_SDINTMASK
);
msleep
(
1
);
lenreg
=
VIA_CRDR_SDBLKLEN_GPIDET
|
VIA_CRDR_SDBLKLEN_INTEN
;
writel
(
lenreg
,
addrbase
+
VIA_CRDR_SDBLKLEN
);
status
=
readw
(
addrbase
+
VIA_CRDR_SDSTATUS
);
status
&=
VIA_CRDR_SDSTS_W1C_MASK
;
writew
(
status
,
addrbase
+
VIA_CRDR_SDSTATUS
);
status
=
readw
(
addrbase
+
VIA_CRDR_SDSTATUS2
);
status
|=
VIA_CRDR_SDSTS_CFE
;
writew
(
status
,
addrbase
+
VIA_CRDR_SDSTATUS2
);
writeb
(
0x0
,
addrbase
+
VIA_CRDR_SDEXTCTRL
);
writel
(
VIA_CRDR_SDACTIVE_INTMASK
,
addrbase
+
VIA_CRDR_SDINTMASK
);
msleep
(
1
);
}
static
int
__devinit
via_sd_probe
(
struct
pci_dev
*
pcidev
,
const
struct
pci_device_id
*
id
)
{
struct
mmc_host
*
mmc
;
struct
via_crdr_mmc_host
*
sdhost
;
u32
base
,
len
;
u8
rev
,
gatt
;
int
ret
;
pci_read_config_byte
(
pcidev
,
PCI_CLASS_REVISION
,
&
rev
);
pr_info
(
DRV_NAME
": VIA SDMMC controller found at %s [%04x:%04x] (rev %x)
\n
"
,
pci_name
(
pcidev
),
(
int
)
pcidev
->
vendor
,
(
int
)
pcidev
->
device
,
(
int
)
rev
);
ret
=
pci_enable_device
(
pcidev
);
if
(
ret
)
return
ret
;
ret
=
pci_request_regions
(
pcidev
,
DRV_NAME
);
if
(
ret
)
goto
disable
;
pci_write_config_byte
(
pcidev
,
VIA_CRDR_PCI_WORK_MODE
,
0
);
pci_write_config_byte
(
pcidev
,
VIA_CRDR_PCI_DBG_MODE
,
0
);
mmc
=
mmc_alloc_host
(
sizeof
(
struct
via_crdr_mmc_host
),
&
pcidev
->
dev
);
if
(
!
mmc
)
{
ret
=
-
ENOMEM
;
goto
release
;
}
sdhost
=
mmc_priv
(
mmc
);
sdhost
->
mmc
=
mmc
;
dev_set_drvdata
(
&
pcidev
->
dev
,
sdhost
);
len
=
pci_resource_len
(
pcidev
,
0
);
base
=
pci_resource_start
(
pcidev
,
0
);
sdhost
->
mmiobase
=
ioremap_nocache
(
base
,
len
);
if
(
!
sdhost
->
mmiobase
)
{
ret
=
-
ENOMEM
;
goto
free_mmc_host
;
}
sdhost
->
sdhc_mmiobase
=
sdhost
->
mmiobase
+
VIA_CRDR_SDC_OFF
;
sdhost
->
ddma_mmiobase
=
sdhost
->
mmiobase
+
VIA_CRDR_DDMA_OFF
;
sdhost
->
pcictrl_mmiobase
=
sdhost
->
mmiobase
+
VIA_CRDR_PCICTRL_OFF
;
sdhost
->
power
=
MMC_VDD_165_195
;
gatt
=
VIA_CRDR_PCICLKGATT_3V3
|
VIA_CRDR_PCICLKGATT_PAD_PWRON
;
writeb
(
gatt
,
sdhost
->
pcictrl_mmiobase
+
VIA_CRDR_PCICLKGATT
);
via_pwron_sleep
(
sdhost
);
gatt
|=
VIA_CRDR_PCICLKGATT_SFTRST
;
writeb
(
gatt
,
sdhost
->
pcictrl_mmiobase
+
VIA_CRDR_PCICLKGATT
);
msleep
(
3
);
via_init_mmc_host
(
sdhost
);
ret
=
request_irq
(
pcidev
->
irq
,
via_sdc_isr
,
IRQF_SHARED
,
DRV_NAME
,
sdhost
);
if
(
ret
)
goto
unmap
;
writeb
(
VIA_CRDR_PCIINTCTRL_SDCIRQEN
,
sdhost
->
pcictrl_mmiobase
+
VIA_CRDR_PCIINTCTRL
);
writeb
(
VIA_CRDR_PCITMOCTRL_1024MS
,
sdhost
->
pcictrl_mmiobase
+
VIA_CRDR_PCITMOCTRL
);
/* device-specific quirks */
if
(
pcidev
->
subsystem_vendor
==
PCI_VENDOR_ID_LENOVO
&&
pcidev
->
subsystem_device
==
0x3891
)
sdhost
->
quirks
=
VIA_CRDR_QUIRK_300MS_PWRDELAY
;
mmc_add_host
(
mmc
);
return
0
;
unmap:
iounmap
(
sdhost
->
mmiobase
);
free_mmc_host:
dev_set_drvdata
(
&
pcidev
->
dev
,
NULL
);
mmc_free_host
(
mmc
);
release:
pci_release_regions
(
pcidev
);
disable:
pci_disable_device
(
pcidev
);
return
ret
;
}
static
void
__devexit
via_sd_remove
(
struct
pci_dev
*
pcidev
)
{
struct
via_crdr_mmc_host
*
sdhost
=
pci_get_drvdata
(
pcidev
);
unsigned
long
flags
;
u8
gatt
;
spin_lock_irqsave
(
&
sdhost
->
lock
,
flags
);
/* Ensure we don't accept more commands from mmc layer */
sdhost
->
reject
=
1
;
/* Disable generating further interrupts */
writeb
(
0x0
,
sdhost
->
pcictrl_mmiobase
+
VIA_CRDR_PCIINTCTRL
);
mmiowb
();
if
(
sdhost
->
mrq
)
{
printk
(
KERN_ERR
"%s: Controller removed during "
"transfer
\n
"
,
mmc_hostname
(
sdhost
->
mmc
));
/* make sure all DMA is stopped */
writel
(
VIA_CRDR_DMACTRL_SFTRST
,
sdhost
->
ddma_mmiobase
+
VIA_CRDR_DMACTRL
);
mmiowb
();
sdhost
->
mrq
->
cmd
->
error
=
-
ENOMEDIUM
;
if
(
sdhost
->
mrq
->
stop
)
sdhost
->
mrq
->
stop
->
error
=
-
ENOMEDIUM
;
tasklet_schedule
(
&
sdhost
->
finish_tasklet
);
}
spin_unlock_irqrestore
(
&
sdhost
->
lock
,
flags
);
mmc_remove_host
(
sdhost
->
mmc
);
free_irq
(
pcidev
->
irq
,
sdhost
);
del_timer_sync
(
&
sdhost
->
timer
);
tasklet_kill
(
&
sdhost
->
finish_tasklet
);
/* switch off power */
gatt
=
readb
(
sdhost
->
pcictrl_mmiobase
+
VIA_CRDR_PCICLKGATT
);
gatt
&=
~
VIA_CRDR_PCICLKGATT_PAD_PWRON
;
writeb
(
gatt
,
sdhost
->
pcictrl_mmiobase
+
VIA_CRDR_PCICLKGATT
);
iounmap
(
sdhost
->
mmiobase
);
dev_set_drvdata
(
&
pcidev
->
dev
,
NULL
);
mmc_free_host
(
sdhost
->
mmc
);
pci_release_regions
(
pcidev
);
pci_disable_device
(
pcidev
);
pr_info
(
DRV_NAME
": VIA SDMMC controller at %s [%04x:%04x] has been removed
\n
"
,
pci_name
(
pcidev
),
(
int
)
pcidev
->
vendor
,
(
int
)
pcidev
->
device
);
}
#ifdef CONFIG_PM
static
void
via_init_sdc_pm
(
struct
via_crdr_mmc_host
*
host
)
{
struct
sdhcreg
*
pm_sdhcreg
;
void
__iomem
*
addrbase
;
u32
lenreg
;
u16
status
;
pm_sdhcreg
=
&
(
host
->
pm_sdhc_reg
);
addrbase
=
host
->
sdhc_mmiobase
;
writel
(
0x0
,
addrbase
+
VIA_CRDR_SDINTMASK
);
lenreg
=
VIA_CRDR_SDBLKLEN_GPIDET
|
VIA_CRDR_SDBLKLEN_INTEN
;
writel
(
lenreg
,
addrbase
+
VIA_CRDR_SDBLKLEN
);
status
=
readw
(
addrbase
+
VIA_CRDR_SDSTATUS
);
status
&=
VIA_CRDR_SDSTS_W1C_MASK
;
writew
(
status
,
addrbase
+
VIA_CRDR_SDSTATUS
);
status
=
readw
(
addrbase
+
VIA_CRDR_SDSTATUS2
);
status
|=
VIA_CRDR_SDSTS_CFE
;
writew
(
status
,
addrbase
+
VIA_CRDR_SDSTATUS2
);
writel
(
pm_sdhcreg
->
sdcontrol_reg
,
addrbase
+
VIA_CRDR_SDCTRL
);
writel
(
pm_sdhcreg
->
sdcmdarg_reg
,
addrbase
+
VIA_CRDR_SDCARG
);
writel
(
pm_sdhcreg
->
sdintmask_reg
,
addrbase
+
VIA_CRDR_SDINTMASK
);
writel
(
pm_sdhcreg
->
sdrsptmo_reg
,
addrbase
+
VIA_CRDR_SDRSPTMO
);
writel
(
pm_sdhcreg
->
sdclksel_reg
,
addrbase
+
VIA_CRDR_SDCLKSEL
);
writel
(
pm_sdhcreg
->
sdextctrl_reg
,
addrbase
+
VIA_CRDR_SDEXTCTRL
);
via_print_pcictrl
(
host
);
via_print_sdchc
(
host
);
}
static
int
via_sd_suspend
(
struct
pci_dev
*
pcidev
,
pm_message_t
state
)
{
struct
via_crdr_mmc_host
*
host
;
int
ret
=
0
;
host
=
pci_get_drvdata
(
pcidev
);
via_save_pcictrlreg
(
host
);
via_save_sdcreg
(
host
);
ret
=
mmc_suspend_host
(
host
->
mmc
,
state
);
pci_save_state
(
pcidev
);
pci_enable_wake
(
pcidev
,
pci_choose_state
(
pcidev
,
state
),
0
);
pci_disable_device
(
pcidev
);
pci_set_power_state
(
pcidev
,
pci_choose_state
(
pcidev
,
state
));
return
ret
;
}
static
int
via_sd_resume
(
struct
pci_dev
*
pcidev
)
{
struct
via_crdr_mmc_host
*
sdhost
;
int
ret
=
0
;
u8
gatt
;
sdhost
=
pci_get_drvdata
(
pcidev
);
gatt
=
VIA_CRDR_PCICLKGATT_PAD_PWRON
;
if
(
sdhost
->
power
==
MMC_VDD_165_195
)
gatt
&=
~
VIA_CRDR_PCICLKGATT_3V3
;
else
gatt
|=
VIA_CRDR_PCICLKGATT_3V3
;
writeb
(
gatt
,
sdhost
->
pcictrl_mmiobase
+
VIA_CRDR_PCICLKGATT
);
via_pwron_sleep
(
sdhost
);
gatt
|=
VIA_CRDR_PCICLKGATT_SFTRST
;
writeb
(
gatt
,
sdhost
->
pcictrl_mmiobase
+
VIA_CRDR_PCICLKGATT
);
msleep
(
3
);
msleep
(
100
);
pci_set_power_state
(
pcidev
,
PCI_D0
);
pci_restore_state
(
pcidev
);
ret
=
pci_enable_device
(
pcidev
);
if
(
ret
)
return
ret
;
via_restore_pcictrlreg
(
sdhost
);
via_init_sdc_pm
(
sdhost
);
ret
=
mmc_resume_host
(
sdhost
->
mmc
);
return
ret
;
}
#else
/* CONFIG_PM */
#define via_sd_suspend NULL
#define via_sd_resume NULL
#endif
/* CONFIG_PM */
static
struct
pci_driver
via_sd_driver
=
{
.
name
=
DRV_NAME
,
.
id_table
=
via_ids
,
.
probe
=
via_sd_probe
,
.
remove
=
__devexit_p
(
via_sd_remove
),
.
suspend
=
via_sd_suspend
,
.
resume
=
via_sd_resume
,
};
static
int
__init
via_sd_drv_init
(
void
)
{
pr_info
(
DRV_NAME
": VIA SD/MMC Card Reader driver "
"(C) 2008 VIA Technologies, Inc.
\n
"
);
return
pci_register_driver
(
&
via_sd_driver
);
}
static
void
__exit
via_sd_drv_exit
(
void
)
{
pci_unregister_driver
(
&
via_sd_driver
);
}
module_init
(
via_sd_drv_init
);
module_exit
(
via_sd_drv_exit
);
MODULE_LICENSE
(
"GPL"
);
MODULE_AUTHOR
(
"VIA Technologies Inc."
);
MODULE_DESCRIPTION
(
"VIA SD/MMC Card Interface driver"
);
include/linux/mm.h
...
@@ -810,11 +810,11 @@ extern int vmtruncate_range(struct inode * inode, loff_t offset, loff_t end);

 #ifdef CONFIG_MMU
 extern int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct *vma,
-			unsigned long address, int write_access);
+			unsigned long address, unsigned int flags);
 #else
 static inline int handle_mm_fault(struct mm_struct *mm,
 			struct vm_area_struct *vma, unsigned long address,
-			int write_access)
+			unsigned int flags)
 {
 	/* should never happen if there's no MMU */
 	BUG();
...
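Callers of handle_mm_fault() now pass a flags word instead of the old write_access integer; in this series the architecture fault handlers translate the old boolean as "write ? FAULT_FLAG_WRITE : 0". A user-space sketch of that plumbing, with a stub in place of the real function and an illustrative flag value:

#include <stdio.h>

#define FAULT_FLAG_WRITE 0x01	/* illustrative value for the flags-based interface */

/* stub standing in for handle_mm_fault(); only the flag plumbing matters here */
static int handle_mm_fault_stub(unsigned long address, unsigned int flags)
{
	printf("fault at %#lx, %s access\n", address,
	       (flags & FAULT_FLAG_WRITE) ? "write" : "read");
	return 0;
}

int main(void)
{
	int write = 1;	/* what callers used to pass as write_access */

	/* callers now translate the old boolean into a flags word */
	handle_mm_fault_stub(0xdeadbeefUL, write ? FAULT_FLAG_WRITE : 0);
	return 0;
}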
ipc/util.h
...
@@ -10,6 +10,7 @@
 #ifndef _IPC_UTIL_H
 #define _IPC_UTIL_H

+#include <linux/unistd.h>
 #include <linux/err.h>

 #define SEQ_MULTIPLIER	(IPCMNI)
...
lib/Kconfig.debug
...
@@ -472,7 +472,7 @@ config LOCKDEP
	bool
	depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
	select STACKTRACE
-	select FRAME_POINTER if !X86 && !MIPS && !PPC && !ARM_UNWIND && !S390
+	select FRAME_POINTER if !MIPS && !PPC && !ARM_UNWIND && !S390
	select KALLSYMS
	select KALLSYMS_ALL
...
lib/dma-debug.c
...
@@ -262,11 +262,12 @@ static struct dma_debug_entry *hash_bucket_find(struct hash_bucket *bucket,
...
@@ -262,11 +262,12 @@ static struct dma_debug_entry *hash_bucket_find(struct hash_bucket *bucket,
*/
*/
matches
+=
1
;
matches
+=
1
;
match_lvl
=
0
;
match_lvl
=
0
;
entry
->
size
==
ref
->
size
?
++
match_lvl
:
match_lvl
;
entry
->
size
==
ref
->
size
?
++
match_lvl
:
0
;
entry
->
type
==
ref
->
type
?
++
match_lvl
:
match_lvl
;
entry
->
type
==
ref
->
type
?
++
match_lvl
:
0
;
entry
->
direction
==
ref
->
direction
?
++
match_lvl
:
match_lvl
;
entry
->
direction
==
ref
->
direction
?
++
match_lvl
:
0
;
entry
->
sg_call_ents
==
ref
->
sg_call_ents
?
++
match_lvl
:
0
;
if
(
match_lvl
==
3
)
{
if
(
match_lvl
==
4
)
{
/* perfect-fit - return the result */
/* perfect-fit - return the result */
return
entry
;
return
entry
;
}
else
if
(
match_lvl
>
last_lvl
)
{
}
else
if
(
match_lvl
>
last_lvl
)
{
...
@@ -873,72 +874,68 @@ static void check_for_illegal_area(struct device *dev, void *addr, u64 size)
...
@@ -873,72 +874,68 @@ static void check_for_illegal_area(struct device *dev, void *addr, u64 size)
"[addr=%p] [size=%llu]
\n
"
,
addr
,
size
);
"[addr=%p] [size=%llu]
\n
"
,
addr
,
size
);
}
}
static
void
check_sync
(
struct
device
*
dev
,
dma_addr_t
addr
,
static
void
check_sync
(
struct
device
*
dev
,
u64
size
,
u64
offset
,
int
direction
,
bool
to_cpu
)
struct
dma_debug_entry
*
ref
,
bool
to_cpu
)
{
{
struct
dma_debug_entry
ref
=
{
.
dev
=
dev
,
.
dev_addr
=
addr
,
.
size
=
size
,
.
direction
=
direction
,
};
struct
dma_debug_entry
*
entry
;
struct
dma_debug_entry
*
entry
;
struct
hash_bucket
*
bucket
;
struct
hash_bucket
*
bucket
;
unsigned
long
flags
;
unsigned
long
flags
;
bucket
=
get_hash_bucket
(
&
ref
,
&
flags
);
bucket
=
get_hash_bucket
(
ref
,
&
flags
);
entry
=
hash_bucket_find
(
bucket
,
&
ref
);
entry
=
hash_bucket_find
(
bucket
,
ref
);
if
(
!
entry
)
{
if
(
!
entry
)
{
err_printk
(
dev
,
NULL
,
"DMA-API: device driver tries "
err_printk
(
dev
,
NULL
,
"DMA-API: device driver tries "
"to sync DMA memory it has not allocated "
"to sync DMA memory it has not allocated "
"[device address=0x%016llx] [size=%llu bytes]
\n
"
,
"[device address=0x%016llx] [size=%llu bytes]
\n
"
,
(
unsigned
long
long
)
addr
,
size
);
(
unsigned
long
long
)
ref
->
dev_addr
,
ref
->
size
);
goto
out
;
goto
out
;
}
}
if
(
(
offset
+
size
)
>
entry
->
size
)
{
if
(
ref
->
size
>
entry
->
size
)
{
err_printk
(
dev
,
entry
,
"DMA-API: device driver syncs"
err_printk
(
dev
,
entry
,
"DMA-API: device driver syncs"
" DMA memory outside allocated range "
" DMA memory outside allocated range "
"[device address=0x%016llx] "
"[device address=0x%016llx] "
"[allocation size=%llu bytes] [sync offset=%llu] "
"[allocation size=%llu bytes] "
"[sync size=%llu]
\n
"
,
entry
->
dev_addr
,
entry
->
size
,
"[sync offset+size=%llu]
\n
"
,
offset
,
size
);
entry
->
dev_addr
,
entry
->
size
,
ref
->
size
);
}
}
if
(
direction
!=
entry
->
direction
)
{
if
(
ref
->
direction
!=
entry
->
direction
)
{
err_printk
(
dev
,
entry
,
"DMA-API: device driver syncs "
err_printk
(
dev
,
entry
,
"DMA-API: device driver syncs "
"DMA memory with different direction "
"DMA memory with different direction "
"[device address=0x%016llx] [size=%llu bytes] "
"[device address=0x%016llx] [size=%llu bytes] "
"[mapped with %s] [synced with %s]
\n
"
,
"[mapped with %s] [synced with %s]
\n
"
,
(
unsigned
long
long
)
addr
,
entry
->
size
,
(
unsigned
long
long
)
ref
->
dev_
addr
,
entry
->
size
,
dir2name
[
entry
->
direction
],
dir2name
[
entry
->
direction
],
dir2name
[
direction
]);
dir2name
[
ref
->
direction
]);
}
}
if
(
entry
->
direction
==
DMA_BIDIRECTIONAL
)
if
(
entry
->
direction
==
DMA_BIDIRECTIONAL
)
goto
out
;
goto
out
;
if
(
to_cpu
&&
!
(
entry
->
direction
==
DMA_FROM_DEVICE
)
&&
if
(
to_cpu
&&
!
(
entry
->
direction
==
DMA_FROM_DEVICE
)
&&
!
(
direction
==
DMA_TO_DEVICE
))
!
(
ref
->
direction
==
DMA_TO_DEVICE
))
err_printk
(
dev
,
entry
,
"DMA-API: device driver syncs "
err_printk
(
dev
,
entry
,
"DMA-API: device driver syncs "
"device read-only DMA memory for cpu "
"device read-only DMA memory for cpu "
"[device address=0x%016llx] [size=%llu bytes] "
"[device address=0x%016llx] [size=%llu bytes] "
"[mapped with %s] [synced with %s]
\n
"
,
"[mapped with %s] [synced with %s]
\n
"
,
(
unsigned
long
long
)
addr
,
entry
->
size
,
(
unsigned
long
long
)
ref
->
dev_
addr
,
entry
->
size
,
dir2name
[
entry
->
direction
],
dir2name
[
entry
->
direction
],
dir2name
[
direction
]);
dir2name
[
ref
->
direction
]);
if
(
!
to_cpu
&&
!
(
entry
->
direction
==
DMA_TO_DEVICE
)
&&
if
(
!
to_cpu
&&
!
(
entry
->
direction
==
DMA_TO_DEVICE
)
&&
!
(
direction
==
DMA_FROM_DEVICE
))
!
(
ref
->
direction
==
DMA_FROM_DEVICE
))
err_printk
(
dev
,
entry
,
"DMA-API: device driver syncs "
err_printk
(
dev
,
entry
,
"DMA-API: device driver syncs "
"device write-only DMA memory to device "
"device write-only DMA memory to device "
"[device address=0x%016llx] [size=%llu bytes] "
"[device address=0x%016llx] [size=%llu bytes] "
"[mapped with %s] [synced with %s]
\n
"
,
"[mapped with %s] [synced with %s]
\n
"
,
(
unsigned
long
long
)
addr
,
entry
->
size
,
(
unsigned
long
long
)
ref
->
dev_
addr
,
entry
->
size
,
dir2name
[
entry
->
direction
],
dir2name
[
entry
->
direction
],
dir2name
[
direction
]);
dir2name
[
ref
->
direction
]);
out:
out:
put_hash_bucket
(
bucket
,
&
flags
);
put_hash_bucket
(
bucket
,
&
flags
);
...
@@ -1036,19 +1033,16 @@ void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
...
@@ -1036,19 +1033,16 @@ void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
}
}
EXPORT_SYMBOL
(
debug_dma_map_sg
);
EXPORT_SYMBOL
(
debug_dma_map_sg
);
static
int
get_nr_mapped_entries
(
struct
device
*
dev
,
struct
scatterlist
*
s
)
static
int
get_nr_mapped_entries
(
struct
device
*
dev
,
struct
dma_debug_entry
*
ref
)
{
{
struct
dma_debug_entry
*
entry
,
ref
;
struct
dma_debug_entry
*
entry
;
struct
hash_bucket
*
bucket
;
struct
hash_bucket
*
bucket
;
unsigned
long
flags
;
unsigned
long
flags
;
int
mapped_ents
;
int
mapped_ents
;
ref
.
dev
=
dev
;
bucket
=
get_hash_bucket
(
ref
,
&
flags
);
ref
.
dev_addr
=
sg_dma_address
(
s
);
entry
=
hash_bucket_find
(
bucket
,
ref
);
ref
.
size
=
sg_dma_len
(
s
),
bucket
=
get_hash_bucket
(
&
ref
,
&
flags
);
entry
=
hash_bucket_find
(
bucket
,
&
ref
);
mapped_ents
=
0
;
mapped_ents
=
0
;
if
(
entry
)
if
(
entry
)
...
@@ -1076,16 +1070,14 @@ void debug_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
...
@@ -1076,16 +1070,14 @@ void debug_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
.
dev_addr
=
sg_dma_address
(
s
),
.
dev_addr
=
sg_dma_address
(
s
),
.
size
=
sg_dma_len
(
s
),
.
size
=
sg_dma_len
(
s
),
.
direction
=
dir
,
.
direction
=
dir
,
.
sg_call_ents
=
0
,
.
sg_call_ents
=
nelems
,
};
};
if
(
mapped_ents
&&
i
>=
mapped_ents
)
if
(
mapped_ents
&&
i
>=
mapped_ents
)
break
;
break
;
if
(
!
i
)
{
if
(
!
i
)
ref
.
sg_call_ents
=
nelems
;
mapped_ents
=
get_nr_mapped_entries
(
dev
,
&
ref
);
mapped_ents
=
get_nr_mapped_entries
(
dev
,
s
);
}
check_unmap
(
&
ref
);
check_unmap
(
&
ref
);
}
}
...
@@ -1140,10 +1132,19 @@ EXPORT_SYMBOL(debug_dma_free_coherent);
...
@@ -1140,10 +1132,19 @@ EXPORT_SYMBOL(debug_dma_free_coherent);
void
debug_dma_sync_single_for_cpu
(
struct
device
*
dev
,
dma_addr_t
dma_handle
,
void
debug_dma_sync_single_for_cpu
(
struct
device
*
dev
,
dma_addr_t
dma_handle
,
size_t
size
,
int
direction
)
size_t
size
,
int
direction
)
{
{
struct
dma_debug_entry
ref
;
if
(
unlikely
(
global_disable
))
if
(
unlikely
(
global_disable
))
return
;
return
;
check_sync
(
dev
,
dma_handle
,
size
,
0
,
direction
,
true
);
ref
.
type
=
dma_debug_single
;
ref
.
dev
=
dev
;
ref
.
dev_addr
=
dma_handle
;
ref
.
size
=
size
;
ref
.
direction
=
direction
;
ref
.
sg_call_ents
=
0
;
check_sync
(
dev
,
&
ref
,
true
);
}
}
EXPORT_SYMBOL
(
debug_dma_sync_single_for_cpu
);
EXPORT_SYMBOL
(
debug_dma_sync_single_for_cpu
);
...
@@ -1151,10 +1152,19 @@ void debug_dma_sync_single_for_device(struct device *dev,
...
@@ -1151,10 +1152,19 @@ void debug_dma_sync_single_for_device(struct device *dev,
dma_addr_t
dma_handle
,
size_t
size
,
dma_addr_t
dma_handle
,
size_t
size
,
int
direction
)
int
direction
)
{
{
struct
dma_debug_entry
ref
;
if
(
unlikely
(
global_disable
))
if
(
unlikely
(
global_disable
))
return
;
return
;
check_sync
(
dev
,
dma_handle
,
size
,
0
,
direction
,
false
);
ref
.
type
=
dma_debug_single
;
ref
.
dev
=
dev
;
ref
.
dev_addr
=
dma_handle
;
ref
.
size
=
size
;
ref
.
direction
=
direction
;
ref
.
sg_call_ents
=
0
;
check_sync
(
dev
,
&
ref
,
false
);
}
}
EXPORT_SYMBOL
(
debug_dma_sync_single_for_device
);
EXPORT_SYMBOL
(
debug_dma_sync_single_for_device
);
...
@@ -1163,10 +1173,19 @@ void debug_dma_sync_single_range_for_cpu(struct device *dev,
...
@@ -1163,10 +1173,19 @@ void debug_dma_sync_single_range_for_cpu(struct device *dev,
unsigned
long
offset
,
size_t
size
,
unsigned
long
offset
,
size_t
size
,
int
direction
)
int
direction
)
{
{
struct
dma_debug_entry
ref
;
if
(
unlikely
(
global_disable
))
if
(
unlikely
(
global_disable
))
return
;
return
;
check_sync
(
dev
,
dma_handle
,
size
,
offset
,
direction
,
true
);
ref
.
type
=
dma_debug_single
;
ref
.
dev
=
dev
;
ref
.
dev_addr
=
dma_handle
;
ref
.
size
=
offset
+
size
;
ref
.
direction
=
direction
;
ref
.
sg_call_ents
=
0
;
check_sync
(
dev
,
&
ref
,
true
);
}
}
EXPORT_SYMBOL
(
debug_dma_sync_single_range_for_cpu
);
EXPORT_SYMBOL
(
debug_dma_sync_single_range_for_cpu
);
...
@@ -1175,10 +1194,19 @@ void debug_dma_sync_single_range_for_device(struct device *dev,
...
@@ -1175,10 +1194,19 @@ void debug_dma_sync_single_range_for_device(struct device *dev,
unsigned
long
offset
,
unsigned
long
offset
,
size_t
size
,
int
direction
)
size_t
size
,
int
direction
)
{
{
struct
dma_debug_entry
ref
;
if
(
unlikely
(
global_disable
))
if
(
unlikely
(
global_disable
))
return
;
return
;
check_sync
(
dev
,
dma_handle
,
size
,
offset
,
direction
,
false
);
ref
.
type
=
dma_debug_single
;
ref
.
dev
=
dev
;
ref
.
dev_addr
=
dma_handle
;
ref
.
size
=
offset
+
size
;
ref
.
direction
=
direction
;
ref
.
sg_call_ents
=
0
;
check_sync
(
dev
,
&
ref
,
false
);
}
}
EXPORT_SYMBOL
(
debug_dma_sync_single_range_for_device
);
EXPORT_SYMBOL
(
debug_dma_sync_single_range_for_device
);
...
@@ -1192,14 +1220,24 @@ void debug_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
...
@@ -1192,14 +1220,24 @@ void debug_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
return
;
return
;
for_each_sg
(
sg
,
s
,
nelems
,
i
)
{
for_each_sg
(
sg
,
s
,
nelems
,
i
)
{
struct
dma_debug_entry
ref
=
{
.
type
=
dma_debug_sg
,
.
dev
=
dev
,
.
paddr
=
sg_phys
(
s
),
.
dev_addr
=
sg_dma_address
(
s
),
.
size
=
sg_dma_len
(
s
),
.
direction
=
direction
,
.
sg_call_ents
=
nelems
,
};
if
(
!
i
)
if
(
!
i
)
mapped_ents
=
get_nr_mapped_entries
(
dev
,
s
);
mapped_ents
=
get_nr_mapped_entries
(
dev
,
&
ref
);
if
(
i
>=
mapped_ents
)
if
(
i
>=
mapped_ents
)
break
;
break
;
check_sync
(
dev
,
sg_dma_address
(
s
),
sg_dma_len
(
s
),
0
,
check_sync
(
dev
,
&
ref
,
true
);
direction
,
true
);
}
}
}
}
EXPORT_SYMBOL
(
debug_dma_sync_sg_for_cpu
);
EXPORT_SYMBOL
(
debug_dma_sync_sg_for_cpu
);
...
@@ -1214,14 +1252,23 @@ void debug_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
...
@@ -1214,14 +1252,23 @@ void debug_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
return
;
return
;
for_each_sg
(
sg
,
s
,
nelems
,
i
)
{
for_each_sg
(
sg
,
s
,
nelems
,
i
)
{
struct
dma_debug_entry
ref
=
{
.
type
=
dma_debug_sg
,
.
dev
=
dev
,
.
paddr
=
sg_phys
(
s
),
.
dev_addr
=
sg_dma_address
(
s
),
.
size
=
sg_dma_len
(
s
),
.
direction
=
direction
,
.
sg_call_ents
=
nelems
,
};
if
(
!
i
)
if
(
!
i
)
mapped_ents
=
get_nr_mapped_entries
(
dev
,
s
);
mapped_ents
=
get_nr_mapped_entries
(
dev
,
&
ref
);
if
(
i
>=
mapped_ents
)
if
(
i
>=
mapped_ents
)
break
;
break
;
check_sync
(
dev
,
sg_dma_address
(
s
),
sg_dma_len
(
s
),
0
,
check_sync
(
dev
,
&
ref
,
false
);
direction
,
false
);
}
}
}
}
EXPORT_SYMBOL
(
debug_dma_sync_sg_for_device
);
EXPORT_SYMBOL
(
debug_dma_sync_sg_for_device
);
...
...
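For reference, after the hash_bucket_find() change above an entry is scored on four fields and only a score of 4 counts as a perfect fit. The following is a standalone, hedged illustration of that scoring (plain userspace C, not kernel code; the struct below only mirrors the fields being compared):

	#include <stdio.h>

	/* Mirrors only the fields that hash_bucket_find() compares. */
	struct dbg_ref {
		unsigned long long size;
		int type;
		int direction;
		int sg_call_ents;
	};

	/* Returns 0..4; 4 means all four fields match (a "perfect fit"). */
	static int match_level(const struct dbg_ref *entry, const struct dbg_ref *ref)
	{
		int match_lvl = 0;

		match_lvl += (entry->size         == ref->size);
		match_lvl += (entry->type         == ref->type);
		match_lvl += (entry->direction    == ref->direction);
		match_lvl += (entry->sg_call_ents == ref->sg_call_ents);

		return match_lvl;
	}

	int main(void)
	{
		struct dbg_ref a = { 4096, 0, 1, 0 };
		struct dbg_ref b = { 4096, 0, 1, 2 };	/* differs only in sg_call_ents */

		printf("%d\n", match_level(&a, &b));	/* prints 3: close, but not a perfect fit */
		return 0;
	}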
mm/memory.c

@@ -1310,8 +1310,9 @@ int __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 			cond_resched();
 			while (!(page = follow_page(vma, start, foll_flags))) {
 				int ret;
 
-				ret = handle_mm_fault(mm, vma, start,
-						foll_flags & FOLL_WRITE);
+				/* FOLL_WRITE matches FAULT_FLAG_WRITE! */
+				ret = handle_mm_fault(mm, vma, start,
+					foll_flags & FOLL_WRITE);
 				if (ret & VM_FAULT_ERROR) {
 					if (ret & VM_FAULT_OOM)
 						return i ? i : -ENOMEM;

@@ -2496,7 +2497,7 @@ int vmtruncate_range(struct inode *inode, loff_t offset, loff_t end)
 */
 static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		unsigned long address, pte_t *page_table, pmd_t *pmd,
-		int write_access, pte_t orig_pte)
+		unsigned int flags, pte_t orig_pte)
 {
 	spinlock_t *ptl;
 	struct page *page;

@@ -2572,9 +2573,9 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	inc_mm_counter(mm, anon_rss);
 	pte = mk_pte(page, vma->vm_page_prot);
-	if (write_access && reuse_swap_page(page)) {
+	if ((flags & FAULT_FLAG_WRITE) && reuse_swap_page(page)) {
 		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
-		write_access = 0;
+		flags &= ~FAULT_FLAG_WRITE;
 	}
 	flush_icache_page(vma, page);
 	set_pte_at(mm, address, page_table, pte);

@@ -2587,7 +2588,7 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	try_to_free_swap(page);
 	unlock_page(page);
-	if (write_access) {
+	if (flags & FAULT_FLAG_WRITE) {
 		ret |= do_wp_page(mm, vma, address, page_table, pmd, ptl, pte);
 		if (ret & VM_FAULT_ERROR)
 			ret &= VM_FAULT_ERROR;

@@ -2616,7 +2617,7 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
 */
 static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		unsigned long address, pte_t *page_table, pmd_t *pmd,
-		int write_access)
+		unsigned int flags)
 {
 	struct page *page;
 	spinlock_t *ptl;

@@ -2776,7 +2777,7 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * due to the bad i386 page protection. But it's valid
 	 * for other architectures too.
 	 *
-	 * Note that if write_access is true, we either now have
+	 * Note that if FAULT_FLAG_WRITE is set, we either now have
 	 * an exclusive copy of the page, or this is a shared mapping,
 	 * so we can make it writable and dirty to avoid having to
 	 * handle that later.

@@ -2847,11 +2848,10 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 static int do_linear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		unsigned long address, pte_t *page_table, pmd_t *pmd,
-		int write_access, pte_t orig_pte)
+		unsigned int flags, pte_t orig_pte)
 {
 	pgoff_t pgoff = (((address & PAGE_MASK)
 			- vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
-	unsigned int flags = (write_access ? FAULT_FLAG_WRITE : 0);
 
 	pte_unmap(page_table);
 	return __do_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);

@@ -2868,12 +2868,12 @@ static int do_linear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 */
 static int do_nonlinear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		unsigned long address, pte_t *page_table, pmd_t *pmd,
-		int write_access, pte_t orig_pte)
+		unsigned int flags, pte_t orig_pte)
 {
-	unsigned int flags = FAULT_FLAG_NONLINEAR |
-				(write_access ? FAULT_FLAG_WRITE : 0);
 	pgoff_t pgoff;
 
+	flags |= FAULT_FLAG_NONLINEAR;
+
 	if (!pte_unmap_same(mm, pmd, page_table, orig_pte))
 		return 0;

@@ -2904,7 +2904,7 @@ static int do_nonlinear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 */
 static inline int handle_pte_fault(struct mm_struct *mm,
 		struct vm_area_struct *vma, unsigned long address,
-		pte_t *pte, pmd_t *pmd, int write_access)
+		pte_t *pte, pmd_t *pmd, unsigned int flags)
 {
 	pte_t entry;
 	spinlock_t *ptl;

@@ -2915,30 +2915,30 @@ static inline int handle_pte_fault(struct mm_struct *mm,
 		if (vma->vm_ops) {
 			if (likely(vma->vm_ops->fault))
 				return do_linear_fault(mm, vma, address,
-						pte, pmd, write_access, entry);
+						pte, pmd, flags, entry);
 		}
 		return do_anonymous_page(mm, vma, address,
-					 pte, pmd, write_access);
+					 pte, pmd, flags);
 	}
 	if (pte_file(entry))
 		return do_nonlinear_fault(mm, vma, address,
-					  pte, pmd, write_access, entry);
+					  pte, pmd, flags, entry);
 	return do_swap_page(mm, vma, address,
-				pte, pmd, write_access, entry);
+				pte, pmd, flags, entry);
 	}
 
 	ptl = pte_lockptr(mm, pmd);
 	spin_lock(ptl);
 	if (unlikely(!pte_same(*pte, entry)))
 		goto unlock;
-	if (write_access) {
+	if (flags & FAULT_FLAG_WRITE) {
 		if (!pte_write(entry))
 			return do_wp_page(mm, vma, address,
 					pte, pmd, ptl, entry);
 		entry = pte_mkdirty(entry);
 	}
 	entry = pte_mkyoung(entry);
-	if (ptep_set_access_flags(vma, address, pte, entry, write_access)) {
+	if (ptep_set_access_flags(vma, address, pte, entry, flags & FAULT_FLAG_WRITE)) {
 		update_mmu_cache(vma, address, entry);
 	} else {
 		/*

@@ -2947,7 +2947,7 @@ static inline int handle_pte_fault(struct mm_struct *mm,
 		 * This still avoids useless tlb flushes for .text page faults
 		 * with threads.
 		 */
-		if (write_access)
+		if (flags & FAULT_FLAG_WRITE)
 			flush_tlb_page(vma, address);
 	}
 unlock:

@@ -2959,7 +2959,7 @@ static inline int handle_pte_fault(struct mm_struct *mm,
 * By the time we get here, we already hold the mm semaphore
 */
 int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct *vma,
-		unsigned long address, int write_access)
+		unsigned long address, unsigned int flags)
 {
 	pgd_t *pgd;
 	pud_t *pud;

@@ -2971,7 +2971,7 @@ int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	count_vm_event(PGFAULT);
 
 	if (unlikely(is_vm_hugetlb_page(vma)))
-		return hugetlb_fault(mm, vma, address, write_access);
+		return hugetlb_fault(mm, vma, address, flags);
 
 	pgd = pgd_offset(mm, address);
 	pud = pud_alloc(mm, pgd, address);

@@ -2984,7 +2984,7 @@ int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (!pte)
 		return VM_FAULT_OOM;
 
-	return handle_pte_fault(mm, vma, address, pte, pmd, write_access);
+	return handle_pte_fault(mm, vma, address, pte, pmd, flags);
 }
 
 #ifndef __PAGETABLE_PUD_FOLDED
mm/percpu.c

@@ -549,14 +549,14 @@ static void pcpu_free_area(struct pcpu_chunk *chunk, int freeme)
 * @chunk: chunk of interest
 * @page_start: page index of the first page to unmap
 * @page_end: page index of the last page to unmap + 1
- * @flush: whether to flush cache and tlb or not
+ * @flush_tlb: whether to flush tlb or not
 *
 * For each cpu, unmap pages [@page_start,@page_end) out of @chunk.
 * If @flush is true, vcache is flushed before unmapping and tlb
 * after.
 */
 static void pcpu_unmap(struct pcpu_chunk *chunk, int page_start, int page_end,
-		       bool flush)
+		       bool flush_tlb)
 {
 	unsigned int last = num_possible_cpus() - 1;
 	unsigned int cpu;

@@ -569,7 +569,6 @@ static void pcpu_unmap(struct pcpu_chunk *chunk, int page_start, int page_end,
 	 * the whole region at once rather than doing it for each cpu.
 	 * This could be an overkill but is more scalable.
 	 */
-	if (flush)
-		flush_cache_vunmap(pcpu_chunk_addr(chunk, 0, page_start),
-				   pcpu_chunk_addr(chunk, last, page_end));
+	flush_cache_vunmap(pcpu_chunk_addr(chunk, 0, page_start),
+			   pcpu_chunk_addr(chunk, last, page_end));

@@ -579,7 +578,7 @@ static void pcpu_unmap(struct pcpu_chunk *chunk, int page_start, int page_end,
 		       (page_end - page_start) << PAGE_SHIFT);
 
 	/* ditto as flush_cache_vunmap() */
-	if (flush)
+	if (flush_tlb)
 		flush_tlb_kernel_range(pcpu_chunk_addr(chunk, 0, page_start),
 				       pcpu_chunk_addr(chunk, last, page_end));
 }

@@ -1234,6 +1233,7 @@ static struct page * __init pcpue_get_page(unsigned int cpu, int pageno)
 ssize_t __init pcpu_embed_first_chunk(size_t static_size, size_t reserved_size,
 				      ssize_t dyn_size, ssize_t unit_size)
 {
+	size_t chunk_size;
 	unsigned int cpu;
 
 	/* determine parameters and allocate */

@@ -1248,11 +1248,15 @@ ssize_t __init pcpu_embed_first_chunk(size_t static_size, size_t reserved_size,
 	} else
 		pcpue_unit_size = max_t(size_t, pcpue_size, PCPU_MIN_UNIT_SIZE);
 
-	pcpue_ptr = __alloc_bootmem_nopanic(
-					num_possible_cpus() * pcpue_unit_size,
-					PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
-	if (!pcpue_ptr)
+	chunk_size = pcpue_unit_size * num_possible_cpus();
+
+	pcpue_ptr = __alloc_bootmem_nopanic(chunk_size, PAGE_SIZE,
+					    __pa(MAX_DMA_ADDRESS));
+	if (!pcpue_ptr) {
+		pr_warning("PERCPU: failed to allocate %zu bytes for "
+			   "embedding\n", chunk_size);
 		return -ENOMEM;
+	}
 
 	/* return the leftover and copy */
 	for_each_possible_cpu(cpu) {
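The embedding helper above is meant to be called once at boot by arch setup code. As a hedged sketch of such a call (PERCPU_FIRST_CHUNK_RESERVE and the warning text are assumptions for illustration, not taken from this diff; negative dyn_size/unit_size ask the allocator to pick defaults):

	/* Sketch of a boot-time caller; names other than
	 * pcpu_embed_first_chunk() itself are illustrative. */
	ssize_t ret;

	ret = pcpu_embed_first_chunk(__per_cpu_end - __per_cpu_start,
				     PERCPU_FIRST_CHUNK_RESERVE,
				     -1, -1);
	if (ret < 0)
		pr_warning("PERCPU: embedding failed (%zd), using fallback\n",
			   ret);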
sound/pci/hda/hda_codec.c

@@ -972,8 +972,6 @@ int /*__devinit*/ snd_hda_codec_new(struct hda_bus *bus, unsigned int codec_addr
 			snd_hda_codec_read(codec, nid, 0,
 					   AC_VERB_GET_SUBSYSTEM_ID, 0);
 	}
-	if (bus->modelname)
-		codec->modelname = kstrdup(bus->modelname, GFP_KERNEL);
 
 	/* power-up all before initialization */
 	hda_set_power_state(codec,
sound/pci/hda/patch_realtek.c

@@ -224,6 +224,7 @@ enum {
 	ALC883_ACER,
 	ALC883_ACER_ASPIRE,
 	ALC888_ACER_ASPIRE_4930G,
+	ALC888_ACER_ASPIRE_6530G,
 	ALC888_ACER_ASPIRE_8930G,
 	ALC883_MEDION,
 	ALC883_MEDION_MD2,

@@ -970,7 +971,7 @@ static void alc_automute_pin(struct hda_codec *codec)
 	}
 }
 
-#if 0 /* it's broken in some acses -- temporarily disabled */
+#if 0 /* it's broken in some cases -- temporarily disabled */
 static void alc_mic_automute(struct hda_codec *codec)
 {
 	struct alc_spec *spec = codec->spec;

@@ -1170,7 +1171,7 @@ static int alc_subsystem_id(struct hda_codec *codec,
 	/* invalid SSID, check the special NID pin defcfg instead */
 	/*
-	 * 31~30	: port conetcivity
+	 * 31~30	: port connectivity
 	 * 29~21	: reserve
 	 * 20		: PCBEEP input
 	 * 19~16	: Check sum (15:1)

@@ -1470,6 +1471,25 @@ static struct hda_verb alc888_acer_aspire_4930g_verbs[] = {
 	{ }
 };
 
+/*
+ * ALC888 Acer Aspire 6530G model
+ */
+
+static struct hda_verb alc888_acer_aspire_6530g_verbs[] = {
+/* Bias voltage on for external mic port */
+	{0x18, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_IN | PIN_VREF80},
+/* Enable unsolicited event for HP jack */
+	{0x15, AC_VERB_SET_UNSOLICITED_ENABLE, ALC880_HP_EVENT | AC_USRSP_EN},
+/* Enable speaker output */
+	{0x14, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT},
+	{0x14, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE},
+/* Enable headphone output */
+	{0x15, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT | PIN_HP},
+	{0x15, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE},
+	{0x15, AC_VERB_SET_CONNECT_SEL, 0x00},
+	{ }
+};
+
 /*
  * ALC889 Acer Aspire 8930G model
  */

@@ -1544,6 +1564,25 @@ static struct hda_input_mux alc888_2_capture_sources[2] = {
 	}
 };
 
+static struct hda_input_mux alc888_acer_aspire_6530_sources[2] = {
+	/* Interal mic only available on one ADC */
+	{
+		.num_items = 3,
+		.items = {
+			{ "Ext Mic", 0x0 },
+			{ "CD", 0x4 },
+			{ "Int Mic", 0xb },
+		},
+	},
+	{
+		.num_items = 2,
+		.items = {
+			{ "Ext Mic", 0x0 },
+			{ "CD", 0x4 },
+		},
+	}
+};
+
 static struct hda_input_mux alc889_capture_sources[3] = {
 	/* Digital mic only available on first "ADC" */
 	{

@@ -6347,7 +6386,7 @@ static struct hda_channel_mode alc882_sixstack_modes[2] = {
 };
 
 /*
- * macbook pro ALC885 can switch LineIn to LineOut without loosing Mic
+ * macbook pro ALC885 can switch LineIn to LineOut without losing Mic
 */
 
 /*

@@ -7047,7 +7086,7 @@ static struct hda_verb alc882_auto_init_verbs[] = {
 #define alc882_loopbacks	alc880_loopbacks
 #endif
 
-/* pcm configuration: identiacal with ALC880 */
+/* pcm configuration: identical with ALC880 */
 #define alc882_pcm_analog_playback	alc880_pcm_analog_playback
 #define alc882_pcm_analog_capture	alc880_pcm_analog_capture
 #define alc882_pcm_digital_playback	alc880_pcm_digital_playback

@@ -8068,7 +8107,7 @@ static struct snd_kcontrol_new alc883_fivestack_mixer[] = {
 	{ } /* end */
 };
 
-static struct snd_kcontrol_new alc883_tagra_mixer[] = {
+static struct snd_kcontrol_new alc883_targa_mixer[] = {
 	HDA_CODEC_VOLUME("Front Playback Volume", 0x0c, 0x0, HDA_OUTPUT),
 	HDA_CODEC_MUTE("Headphone Playback Switch", 0x14, 0x0, HDA_OUTPUT),
 	HDA_CODEC_MUTE("Front Playback Switch", 0x1b, 0x0, HDA_OUTPUT),

@@ -8088,7 +8127,7 @@ static struct snd_kcontrol_new alc883_tagra_mixer[] = {
 	{ } /* end */
 };
 
-static struct snd_kcontrol_new alc883_tagra_2ch_mixer[] = {
+static struct snd_kcontrol_new alc883_targa_2ch_mixer[] = {
 	HDA_CODEC_VOLUME("Front Playback Volume", 0x0c, 0x0, HDA_OUTPUT),
 	HDA_CODEC_MUTE("Headphone Playback Switch", 0x14, 0x0, HDA_OUTPUT),
 	HDA_CODEC_MUTE("Front Playback Switch", 0x1b, 0x0, HDA_OUTPUT),

@@ -8153,6 +8192,19 @@ static struct snd_kcontrol_new alc883_acer_aspire_mixer[] = {
 	{ } /* end */
 };
 
+static struct snd_kcontrol_new alc888_acer_aspire_6530_mixer[] = {
+	HDA_CODEC_VOLUME("Front Playback Volume", 0x0c, 0x0, HDA_OUTPUT),
+	HDA_BIND_MUTE("Front Playback Switch", 0x0c, 2, HDA_INPUT),
+	HDA_CODEC_VOLUME("LFE Playback Volume", 0x0f, 0x0, HDA_OUTPUT),
+	HDA_BIND_MUTE("LFE Playback Switch", 0x0f, 2, HDA_INPUT),
+	HDA_CODEC_VOLUME("CD Playback Volume", 0x0b, 0x04, HDA_INPUT),
+	HDA_CODEC_MUTE("CD Playback Switch", 0x0b, 0x04, HDA_INPUT),
+	HDA_CODEC_VOLUME("Mic Playback Volume", 0x0b, 0x0, HDA_INPUT),
+	HDA_CODEC_VOLUME("Mic Boost", 0x18, 0, HDA_INPUT),
+	HDA_CODEC_MUTE("Mic Playback Switch", 0x0b, 0x0, HDA_INPUT),
+	{ } /* end */
+};
+
 static struct snd_kcontrol_new alc888_lenovo_sky_mixer[] = {
 	HDA_CODEC_VOLUME("Front Playback Volume", 0x0c, 0x0, HDA_OUTPUT),
 	HDA_BIND_MUTE("Front Playback Switch", 0x0c, 2, HDA_INPUT),

@@ -8417,7 +8469,7 @@ static struct hda_verb alc883_2ch_fujitsu_pi2515_verbs[] = {
 	{ } /* end */
 };
 
-static struct hda_verb alc883_tagra_verbs[] = {
+static struct hda_verb alc883_targa_verbs[] = {
 	{0x0c, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_UNMUTE(0)},
 	{0x0c, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_UNMUTE(1)},

@@ -8626,8 +8678,8 @@ static void alc883_medion_md2_init_hook(struct hda_codec *codec)
 }
 
 /* toggle speaker-output according to the hp-jack state */
-#define alc883_tagra_init_hook		alc882_targa_init_hook
-#define alc883_tagra_unsol_event	alc882_targa_unsol_event
+#define alc883_targa_init_hook		alc882_targa_init_hook
+#define alc883_targa_unsol_event	alc882_targa_unsol_event
 
 static void alc883_clevo_m720_mic_automute(struct hda_codec *codec)
 {

@@ -8957,7 +9009,7 @@ static void alc889A_mb31_unsol_event(struct hda_codec *codec, unsigned int res)
 #define alc883_loopbacks	alc880_loopbacks
 #endif
 
-/* pcm configuration: identiacal with ALC880 */
+/* pcm configuration: identical with ALC880 */
 #define alc883_pcm_analog_playback	alc880_pcm_analog_playback
 #define alc883_pcm_analog_capture	alc880_pcm_analog_capture
 #define alc883_pcm_analog_alt_capture	alc880_pcm_analog_alt_capture

@@ -8978,6 +9030,7 @@ static const char *alc883_models[ALC883_MODEL_LAST] = {
 	[ALC883_ACER]		= "acer",
 	[ALC883_ACER_ASPIRE]	= "acer-aspire",
 	[ALC888_ACER_ASPIRE_4930G]	= "acer-aspire-4930g",
+	[ALC888_ACER_ASPIRE_6530G]	= "acer-aspire-6530g",
 	[ALC888_ACER_ASPIRE_8930G]	= "acer-aspire-8930g",
 	[ALC883_MEDION]		= "medion",
 	[ALC883_MEDION_MD2]	= "medion-md2",

@@ -9021,7 +9074,7 @@ static struct snd_pci_quirk alc883_cfg_tbl[] = {
 	SND_PCI_QUIRK(0x1025, 0x015e, "Acer Aspire 6930G",
 		ALC888_ACER_ASPIRE_4930G),
 	SND_PCI_QUIRK(0x1025, 0x0166, "Acer Aspire 6530G",
-		ALC888_ACER_ASPIRE_4930G),
+		ALC888_ACER_ASPIRE_6530G),
 	/* default Acer -- disabled as it causes more problems.
 	 *    model=auto should work fine now
 	 */

@@ -9069,6 +9122,7 @@ static struct snd_pci_quirk alc883_cfg_tbl[] = {
 	SND_PCI_QUIRK(0x1462, 0x7267, "MSI", ALC883_3ST_6ch_DIG),
 	SND_PCI_QUIRK(0x1462, 0x7280, "MSI", ALC883_6ST_DIG),
 	SND_PCI_QUIRK(0x1462, 0x7327, "MSI", ALC883_6ST_DIG),
+	SND_PCI_QUIRK(0x1462, 0x7350, "MSI", ALC883_6ST_DIG),
 	SND_PCI_QUIRK(0x1462, 0xa422, "MSI", ALC883_TARGA_2ch_DIG),
 	SND_PCI_QUIRK(0x147b, 0x1083, "Abit IP35-PRO", ALC883_6ST_DIG),
 	SND_PCI_QUIRK(0x1558, 0x0721, "Clevo laptop M720R", ALC883_CLEVO_M720),

@@ -9165,8 +9219,8 @@ static struct alc_config_preset alc883_presets[] = {
 		.input_mux = &alc883_capture_source,
 	},
 	[ALC883_TARGA_DIG] = {
-		.mixers = { alc883_tagra_mixer, alc883_chmode_mixer },
-		.init_verbs = { alc883_init_verbs, alc883_tagra_verbs},
+		.mixers = { alc883_targa_mixer, alc883_chmode_mixer },
+		.init_verbs = { alc883_init_verbs, alc883_targa_verbs},
 		.num_dacs = ARRAY_SIZE(alc883_dac_nids),
 		.dac_nids = alc883_dac_nids,
 		.dig_out_nid = ALC883_DIGOUT_NID,

@@ -9174,12 +9228,12 @@ static struct alc_config_preset alc883_presets[] = {
 		.channel_mode = alc883_3ST_6ch_modes,
 		.need_dac_fix = 1,
 		.input_mux = &alc883_capture_source,
-		.unsol_event = alc883_tagra_unsol_event,
-		.init_hook = alc883_tagra_init_hook,
+		.unsol_event = alc883_targa_unsol_event,
+		.init_hook = alc883_targa_init_hook,
 	},
 	[ALC883_TARGA_2ch_DIG] = {
-		.mixers = { alc883_tagra_2ch_mixer},
-		.init_verbs = { alc883_init_verbs, alc883_tagra_verbs},
+		.mixers = { alc883_targa_2ch_mixer},
+		.init_verbs = { alc883_init_verbs, alc883_targa_verbs},
 		.num_dacs = ARRAY_SIZE(alc883_dac_nids),
 		.dac_nids = alc883_dac_nids,
 		.adc_nids = alc883_adc_nids_alt,

@@ -9188,13 +9242,13 @@ static struct alc_config_preset alc883_presets[] = {
 		.num_channel_mode = ARRAY_SIZE(alc883_3ST_2ch_modes),
 		.channel_mode = alc883_3ST_2ch_modes,
 		.input_mux = &alc883_capture_source,
-		.unsol_event = alc883_tagra_unsol_event,
-		.init_hook = alc883_tagra_init_hook,
+		.unsol_event = alc883_targa_unsol_event,
+		.init_hook = alc883_targa_init_hook,
 	},
 	[ALC883_TARGA_8ch_DIG] = {
 		.mixers = { alc883_base_mixer, alc883_chmode_mixer },
 		.init_verbs = { alc883_init_verbs, alc880_gpio3_init_verbs,
-				alc883_tagra_verbs },
+				alc883_targa_verbs },
 		.num_dacs = ARRAY_SIZE(alc883_dac_nids),
 		.dac_nids = alc883_dac_nids,
 		.num_adc_nids = ARRAY_SIZE(alc883_adc_nids_rev),

@@ -9206,8 +9260,8 @@ static struct alc_config_preset alc883_presets[] = {
 		.channel_mode = alc883_4ST_8ch_modes,
 		.need_dac_fix = 1,
 		.input_mux = &alc883_capture_source,
-		.unsol_event = alc883_tagra_unsol_event,
-		.init_hook = alc883_tagra_init_hook,
+		.unsol_event = alc883_targa_unsol_event,
+		.init_hook = alc883_targa_init_hook,
 	},
 	[ALC883_ACER] = {
 		.mixers = { alc883_base_mixer },

@@ -9255,6 +9309,24 @@ static struct alc_config_preset alc883_presets[] = {
 		.unsol_event = alc_automute_amp_unsol_event,
 		.init_hook = alc888_acer_aspire_4930g_init_hook,
 	},
+	[ALC888_ACER_ASPIRE_6530G] = {
+		.mixers = { alc888_acer_aspire_6530_mixer },
+		.init_verbs = { alc883_init_verbs, alc880_gpio1_init_verbs,
+				alc888_acer_aspire_6530g_verbs },
+		.num_dacs = ARRAY_SIZE(alc883_dac_nids),
+		.dac_nids = alc883_dac_nids,
+		.num_adc_nids = ARRAY_SIZE(alc883_adc_nids_rev),
+		.adc_nids = alc883_adc_nids_rev,
+		.capsrc_nids = alc883_capsrc_nids_rev,
+		.dig_out_nid = ALC883_DIGOUT_NID,
+		.num_channel_mode = ARRAY_SIZE(alc883_3ST_2ch_modes),
+		.channel_mode = alc883_3ST_2ch_modes,
+		.num_mux_defs =
+			ARRAY_SIZE(alc888_2_capture_sources),
+		.input_mux = alc888_acer_aspire_6530_sources,
+		.unsol_event = alc_automute_amp_unsol_event,
+		.init_hook = alc888_acer_aspire_4930g_init_hook,
+	},
 	[ALC888_ACER_ASPIRE_8930G] = {
 		.mixers = { alc888_base_mixer,
 				alc883_chmode_mixer },

@@ -9361,7 +9433,7 @@ static struct alc_config_preset alc883_presets[] = {
 		.init_hook = alc888_lenovo_ms7195_front_automute,
 	},
 	[ALC883_HAIER_W66] = {
-		.mixers = { alc883_tagra_2ch_mixer},
+		.mixers = { alc883_targa_2ch_mixer},
 		.init_verbs = { alc883_init_verbs, alc883_haier_w66_verbs},
 		.num_dacs = ARRAY_SIZE(alc883_dac_nids),
 		.dac_nids = alc883_dac_nids,

@@ -11131,7 +11203,7 @@ static struct hda_verb alc262_toshiba_rx1_unsol_verbs[] = {
 #define alc262_loopbacks	alc880_loopbacks
 #endif
 
-/* pcm configuration: identiacal with ALC880 */
+/* pcm configuration: identical with ALC880 */
 #define alc262_pcm_analog_playback	alc880_pcm_analog_playback
 #define alc262_pcm_analog_capture	alc880_pcm_analog_capture
 #define alc262_pcm_digital_playback	alc880_pcm_digital_playback

@@ -12286,7 +12358,7 @@ static void alc268_auto_init_mono_speaker_out(struct hda_codec *codec)
 			    AC_VERB_SET_AMP_GAIN_MUTE, dac_vol2);
 }
 
-/* pcm configuration: identiacal with ALC880 */
+/* pcm configuration: identical with ALC880 */
 #define alc268_pcm_analog_playback	alc880_pcm_analog_playback
 #define alc268_pcm_analog_capture	alc880_pcm_analog_capture
 #define alc268_pcm_analog_alt_capture	alc880_pcm_analog_alt_capture

@@ -13197,7 +13269,7 @@ static int alc269_auto_create_analog_input_ctls(struct alc_spec *spec,
 #define alc269_loopbacks	alc880_loopbacks
 #endif
 
-/* pcm configuration: identiacal with ALC880 */
+/* pcm configuration: identical with ALC880 */
 #define alc269_pcm_analog_playback	alc880_pcm_analog_playback
 #define alc269_pcm_analog_capture	alc880_pcm_analog_capture
 #define alc269_pcm_digital_playback	alc880_pcm_digital_playback

@@ -14059,7 +14131,7 @@ static void alc861_toshiba_unsol_event(struct hda_codec *codec,
 	alc861_toshiba_automute(codec);
 }
 
-/* pcm configuration: identiacal with ALC880 */
+/* pcm configuration: identical with ALC880 */
 #define alc861_pcm_analog_playback	alc880_pcm_analog_playback
 #define alc861_pcm_analog_capture	alc880_pcm_analog_capture
 #define alc861_pcm_digital_playback	alc880_pcm_digital_playback

@@ -14582,7 +14654,7 @@ static hda_nid_t alc861vd_dac_nids[4] = {
 /* dac_nids for ALC660vd are in a different order - according to
 * Realtek's driver.
- * This should probably tesult in a different mixer for 6stack models
+ * This should probably result in a different mixer for 6stack models
 * of ALC660vd codecs, but for now there is only 3stack mixer
 * - and it is the same as in 861vd.
 * adc_nids in ALC660vd are (is) the same as in 861vd

@@ -15027,7 +15099,7 @@ static void alc861vd_dallas_init_hook(struct hda_codec *codec)
 #define alc861vd_loopbacks	alc880_loopbacks
 #endif
 
-/* pcm configuration: identiacal with ALC880 */
+/* pcm configuration: identical with ALC880 */
 #define alc861vd_pcm_analog_playback	alc880_pcm_analog_playback
 #define alc861vd_pcm_analog_capture	alc880_pcm_analog_capture
 #define alc861vd_pcm_digital_playback	alc880_pcm_digital_playback

@@ -16669,7 +16741,7 @@ static struct snd_kcontrol_new alc272_nc10_mixer[] = {
 #endif
 
-/* pcm configuration: identiacal with ALC880 */
+/* pcm configuration: identical with ALC880 */
 #define alc662_pcm_analog_playback	alc880_pcm_analog_playback
 #define alc662_pcm_analog_capture	alc880_pcm_analog_capture
 #define alc662_pcm_digital_playback	alc880_pcm_digital_playback
sound/soc/txx9/txx9aclc.c

@@ -297,9 +297,9 @@ static int txx9aclc_pcm_new(struct snd_card *card, struct snd_soc_dai *dai,
 static bool filter(struct dma_chan *chan, void *param)
 {
 	struct txx9aclc_dmadata *dmadata = param;
-	char devname[BUS_ID_SIZE + 2];
+	char devname[20 + 2]; /* FIXME: old BUS_ID_SIZE + 2 */
 
-	sprintf(devname, "%s.%d", dmadata->dma_res->name,
+	snprintf(devname, sizeof(devname), "%s.%d", dmadata->dma_res->name,
 		 (int)dmadata->dma_res->start);
 	if (strcmp(dev_name(chan->device->dev), devname) == 0) {
 		chan->private = &dmadata->dma_slave;
sound/usb/caiaq/audio.c

@@ -199,8 +199,9 @@ static int snd_usb_caiaq_pcm_prepare(struct snd_pcm_substream *substream)
 		dev->period_out_count[index] = BYTES_PER_SAMPLE + 1;
 		dev->audio_out_buf_pos[index] = BYTES_PER_SAMPLE + 1;
 	} else {
-		dev->period_in_count[index] = BYTES_PER_SAMPLE;
-		dev->audio_in_buf_pos[index] = BYTES_PER_SAMPLE;
+		int in_pos = (dev->spec.data_alignment == 2) ? 0 : 2;
+
+		dev->period_in_count[index] = BYTES_PER_SAMPLE + in_pos;
+		dev->audio_in_buf_pos[index] = BYTES_PER_SAMPLE + in_pos;
 	}
 
 	if (dev->streaming)
sound/usb/caiaq/device.c

@@ -35,7 +35,7 @@
 #include "input.h"
 
 MODULE_AUTHOR("Daniel Mack <daniel@caiaq.de>");
-MODULE_DESCRIPTION("caiaq USB audio, version 1.3.16");
+MODULE_DESCRIPTION("caiaq USB audio, version 1.3.17");
 MODULE_LICENSE("GPL");
 MODULE_SUPPORTED_DEVICE("{{Native Instruments, RigKontrol2},"
			 "{Native Instruments, RigKontrol3},"