Commit 28b4d5cc authored by David S. Miller

Merge branch 'master' of /home/davem/src/GIT/linux-2.6/

Conflicts:
	drivers/net/pcmcia/fmvj18x_cs.c
	drivers/net/pcmcia/nmclan_cs.c
	drivers/net/pcmcia/xirc2ps_cs.c
	drivers/net/wireless/ray_cs.c
CONFIG_RCU_TRACE debugfs Files and Formats
The rcupreempt and rcutree implementations of RCU provide debugfs trace
output that summarizes counters and state. This information is useful for
debugging RCU itself, and can sometimes also help to debug abuses of RCU.
Note that the rcuclassic implementation of RCU does not provide debugfs
trace output.
The following sections describe the debugfs files and formats for
preemptable RCU (rcupreempt) and hierarchical RCU (rcutree).
Preemptable RCU debugfs Files and Formats
This implementation of RCU provides three debugfs files under the
top-level directory RCU: rcu/rcuctrs (which displays the per-CPU
counters used by preemptable RCU), rcu/rcugp (which displays
grace-period counters), and rcu/rcustats (which displays internal
counters for debugging RCU).
The output of "cat rcu/rcuctrs" looks as follows:
CPU last cur F M
0 5 -5 0 0
1 -1 0 0 0
2 0 1 0 0
3 0 1 0 0
4 0 1 0 0
5 0 1 0 0
6 0 2 0 0
7 0 -1 0 0
8 0 1 0 0
ggp = 26226, state = waitzero
The per-CPU fields are as follows:
o "CPU" gives the CPU number. Offline CPUs are not displayed.
o "last" gives the value of the counter that is being decremented
for the current grace period phase. In the example above,
the counters sum to 4, indicating that there are still four
RCU read-side critical sections still running that started
before the last counter flip.
o "cur" gives the value of the counter that is currently being
both incremented (by rcu_read_lock()) and decremented (by
rcu_read_unlock()). In the example above, the counters sum to
1, indicating that there is only one RCU read-side critical section
still running that started after the last counter flip.
o "F" indicates whether RCU is waiting for this CPU to acknowledge
a counter flip. In the above example, RCU is not waiting on any,
which is consistent with the state being "waitzero" rather than
"waitack".
o "M" indicates whether RCU is waiting for this CPU to execute a
memory barrier. In the above example, RCU is not waiting on any,
which is consistent with the state being "waitzero" rather than
"waitmb".
o "ggp" is the global grace-period counter.
o "state" is the RCU state, which can be one of the following:
o "idle": there is no grace period in progress.
o "waitack": RCU just incremented the global grace-period
counter, which has the effect of reversing the roles of
the "last" and "cur" counters above, and is waiting for
all the CPUs to acknowledge the flip. Once the flip has
been acknowledged, CPUs will no longer be incrementing
what are now the "last" counters, so that their sum will
decrease monotonically down to zero.
o "waitzero": RCU is waiting for the sum of the "last" counters
to decrease to zero.
o "waitmb": RCU is waiting for each CPU to execute a memory
barrier, which ensures that instructions from a given CPU's
last RCU read-side critical section cannot be reordered
with instructions following the memory-barrier instruction.
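To make the counter-flip mechanics concrete, here is a minimal sketch
of the read-side handling described above. It is a simplification,
not the actual rcupreempt code: nesting, interrupts, and memory
ordering are omitted, the sketch_ names, ggp variable, and flat
rcu_flipctr array are illustrative stand-ins, and rcu_flipctr_idx
stands in for the per-task index the real implementation records.

/* Each CPU keeps a ["last", "cur"] counter pair; the bottom bit
 * of the global grace-period counter selects which one is "cur". */
static int rcu_flipctr[NR_CPUS][2];
static long ggp;			/* global grace-period counter */

static void sketch_rcu_read_lock(void)
{
	int idx = ggp & 0x1;		/* which counter is "cur" right now */

	rcu_flipctr[smp_processor_id()][idx]++;
	current->rcu_flipctr_idx = idx;	/* remembered for the unlock */
}

static void sketch_rcu_read_unlock(void)
{
	/*
	 * Decrement the counter that was "cur" at lock time.  After a
	 * flip this is a "last" counter, so individual counters can go
	 * negative (see CPUs 0 and 7 above); only the sum across all
	 * CPUs must reach zero, which is exactly what the "waitzero"
	 * state waits for.
	 */
	rcu_flipctr[smp_processor_id()][current->rcu_flipctr_idx]--;
}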
The output of "cat rcu/rcugp" looks as follows:
oldggp=48870 newggp=48873
Note that reading from this file provokes a synchronize_rcu(). The
"oldggp" value is that of "ggp" from rcu/rcuctrs above, taken before
executing the synchronize_rcu(), and the "newggp" value is also the
"ggp" value, but taken after the synchronize_rcu() call returns.
The output of "cat rcu/rcugp" looks as follows:
na=1337955 nl=40 wa=1337915 wl=44 da=1337871 dl=0 dr=1337871 di=1337871
1=50989 e1=6138 i1=49722 ie1=82 g1=49640 a1=315203 ae1=265563 a2=49640
z1=1401244 ze1=1351605 z2=49639 m1=5661253 me1=5611614 m2=49639
These are counters tracking internal preemptable-RCU events; however,
some of them may be useful for debugging algorithms using RCU. In
particular, the "nl", "wl", and "dl" values track the number of RCU
callbacks in various states. The fields are as follows:
o "na" is the total number of RCU callbacks that have been enqueued
since boot.
o "nl" is the number of RCU callbacks waiting for the previous
grace period to end so that they can start waiting on the next
grace period.
o "wa" is the total number of RCU callbacks that have started waiting
for a grace period since boot. "na" should be roughly equal to
"nl" plus "wa".
o "wl" is the number of RCU callbacks currently waiting for their
grace period to end.
o "da" is the total number of RCU callbacks whose grace periods
have completed since boot. "wa" should be roughly equal to
"wl" plus "da".
o "dr" is the total number of RCU callbacks that have been removed
from the list of callbacks ready to invoke. "dr" should be roughly
equal to "da".
o "di" is the total number of RCU callbacks that have been invoked
since boot. "di" should be roughly equal to "da", though some
early versions of preemptable RCU had a bug so that only the
last CPU's count of invocations was displayed, rather than the
sum of all CPUs' counts.
o "1" is the number of calls to rcu_try_flip(). This should be
roughly equal to the sum of "e1", "i1", "a1", "z1", and "m1"
described below. In other words, the number of times that
the state machine is visited should be equal to the sum of the
number of times that each state is visited plus the number of
times that the state-machine lock acquisition failed.
o "e1" is the number of times that rcu_try_flip() was unable to
acquire the fliplock.
o "i1" is the number of calls to rcu_try_flip_idle().
o "ie1" is the number of times rcu_try_flip_idle() exited early
due to the calling CPU having no work for RCU.
o "g1" is the number of times that rcu_try_flip_idle() decided
to start a new grace period. "i1" should be roughly equal to
"ie1" plus "g1".
o "a1" is the number of calls to rcu_try_flip_waitack().
o "ae1" is the number of times that rcu_try_flip_waitack() found
that at least one CPU had not yet acknowledged the new grace period
(AKA "counter flip").
o "a2" is the number of time rcu_try_flip_waitack() found that
all CPUs had acknowledged. "a1" should be roughly equal to
"ae1" plus "a2". (This particular output was collected on
a 128-CPU machine, hence the smaller-than-usual fraction of
calls to rcu_try_flip_waitack() finding all CPUs having already
acknowledged.)
o "z1" is the number of calls to rcu_try_flip_waitzero().
o "ze1" is the number of times that rcu_try_flip_waitzero() found
that not all of the old RCU read-side critical sections had
completed.
o "z2" is the number of times that rcu_try_flip_waitzero() finds
the sum of the counters equal to zero, in other words, that
all of the old RCU read-side critical sections had completed.
The value of "z1" should be roughly equal to "ze1" plus
"z2".
o "m1" is the number of calls to rcu_try_flip_waitmb().
o "me1" is the number of times that rcu_try_flip_waitmb() finds
that at least one CPU has not yet executed a memory barrier.
o "m2" is the number of times that rcu_try_flip_waitmb() finds that
all CPUs have executed a memory barrier.
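The relationships among these counters follow from the shape of the
grace-period state machine, sketched below. The stats structure, the
fliplock name, and the sketch_ prefix are illustrative; the per-state
helpers return nonzero on success as described above, and the real
rcu_try_flip() in kernel/rcupreempt.c uses its own trace helpers.

static enum {
	rcu_try_flip_idle_state,	/* "idle" */
	rcu_try_flip_waitack_state,	/* "waitack" */
	rcu_try_flip_waitzero_state,	/* "waitzero" */
	rcu_try_flip_waitmb_state,	/* "waitmb" */
} rcu_try_flip_state;
static DEFINE_SPINLOCK(fliplock);
static struct { long n1, e1, i1, a1, z1, m1; } stats;

static void sketch_rcu_try_flip(void)
{
	unsigned long flags;

	stats.n1++;				/* "1" */
	if (!spin_trylock_irqsave(&fliplock, flags)) {
		stats.e1++;			/* "e1": lock acquisition failed */
		return;
	}
	switch (rcu_try_flip_state) {
	case rcu_try_flip_idle_state:		/* counted in "i1" */
		stats.i1++;
		if (rcu_try_flip_idle())	/* "g1" on success, "ie1" otherwise */
			rcu_try_flip_state = rcu_try_flip_waitack_state;
		break;
	case rcu_try_flip_waitack_state:	/* counted in "a1" */
		stats.a1++;
		if (rcu_try_flip_waitack())	/* "a2" on success, "ae1" otherwise */
			rcu_try_flip_state = rcu_try_flip_waitzero_state;
		break;
	case rcu_try_flip_waitzero_state:	/* counted in "z1" */
		stats.z1++;
		if (rcu_try_flip_waitzero())	/* "z2" on success, "ze1" otherwise */
			rcu_try_flip_state = rcu_try_flip_waitmb_state;
		break;
	case rcu_try_flip_waitmb_state:		/* counted in "m1" */
		stats.m1++;
		if (rcu_try_flip_waitmb())	/* "m2" on success, "me1" otherwise */
			rcu_try_flip_state = rcu_try_flip_idle_state;
		break;
	}
	spin_unlock_irqrestore(&fliplock, flags);
}

This is why "1" should roughly equal "e1" plus "i1" plus "a1" plus
"z1" plus "m1": every invocation either fails the trylock or visits
exactly one state.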
The rcutree implementation of RCU provides debugfs trace output that
summarizes counters and state. This information is useful for debugging
RCU itself, and can sometimes also help to debug abuses of RCU.
The following sections describe the debugfs files and formats.
Hierarchical RCU debugfs Files and Formats
......@@ -210,9 +35,10 @@ rcu_bh:
6 c=-275 g=-275 pq=1 pqc=-275 qp=0 dt=859/1 dn=0 df=15 of=0 ri=0 ql=0 b=10
7 c=-275 g=-275 pq=1 pqc=-275 qp=0 dt=3761/1 dn=0 df=15 of=0 ri=0 ql=0 b=10
The first section lists the rcu_data structures for rcu, the second for
rcu_bh. Each section has one line per CPU, or eight for this 8-CPU system.
The fields are as follows:
The first section lists the rcu_data structures for rcu_sched, the second
for rcu_bh. Note that CONFIG_TREE_PREEMPT_RCU kernels will have an
additional section for rcu_preempt. Each section has one line per CPU,
or eight for this 8-CPU system. The fields are as follows:
o The number at the beginning of each line is the CPU number.
CPU numbers followed by an exclamation mark are offline,
......@@ -223,9 +49,9 @@ o The number at the beginning of each line is the CPU number.
o "c" is the count of grace periods that this CPU believes have
completed. CPUs in dynticks idle mode may lag quite a ways
behind, for example, CPU 4 under "rcu" above, which has slept
through the past 25 RCU grace periods. It is not unusual to
see CPUs lagging by thousands of grace periods.
behind, for example, CPU 4 under "rcu_sched" above, which has
slept through the past 25 RCU grace periods. It is not unusual
to see CPUs lagging by thousands of grace periods.
o "g" is the count of grace periods that this CPU believes have
started. Again, CPUs in dynticks idle mode may lag behind.
......@@ -308,8 +134,10 @@ The output of "cat rcu/rcugp" looks as follows:
rcu_sched: completed=33062 gpnum=33063
rcu_bh: completed=464 gpnum=464
Again, this output is for both "rcu" and "rcu_bh". The fields are
taken from the rcu_state structure, and are as follows:
Again, this output is for both "rcu_sched" and "rcu_bh". Note that
kernels built with CONFIG_TREE_PREEMPT_RCU will have an additional
"rcu_preempt" line. The fields are taken from the rcu_state structure,
and are as follows:
o "completed" is the number of grace periods that have completed.
It is comparable to the "c" field from rcu/rcudata in that a
......@@ -324,23 +152,24 @@ o "gpnum" is the number of grace periods that have started. It is
If these two fields are equal (as they are for "rcu_bh" above),
then there is no grace period in progress, in other words, RCU
is idle. On the other hand, if the two fields differ (as they
do for "rcu" above), then an RCU grace period is in progress.
do for "rcu_sched" above), then an RCU grace period is in progress.
The output of "cat rcu/rcuhier" looks as follows, with very long lines:
c=6902 g=6903 s=2 jfq=3 j=72c7 nfqs=13142/nfqsng=0(13142) fqlh=6
1/1 0:127 ^0
3/3 0:35 ^0 0/0 36:71 ^1 0/0 72:107 ^2 0/0 108:127 ^3
3/3f 0:5 ^0 2/3 6:11 ^1 0/0 12:17 ^2 0/0 18:23 ^3 0/0 24:29 ^4 0/0 30:35 ^5 0/0 36:41 ^0 0/0 42:47 ^1 0/0 48:53 ^2 0/0 54:59 ^3 0/0 60:65 ^4 0/0 66:71 ^5 0/0 72:77 ^0 0/0 78:83 ^1 0/0 84:89 ^2 0/0 90:95 ^3 0/0 96:101 ^4 0/0 102:107 ^5 0/0 108:113 ^0 0/0 114:119 ^1 0/0 120:125 ^2 0/0 126:127 ^3
c=6902 g=6903 s=2 jfq=3 j=72c7 nfqs=13142/nfqsng=0(13142) fqlh=6 oqlen=0
1/1 .>. 0:127 ^0
3/3 .>. 0:35 ^0 0/0 .>. 36:71 ^1 0/0 .>. 72:107 ^2 0/0 .>. 108:127 ^3
3/3f .>. 0:5 ^0 2/3 .>. 6:11 ^1 0/0 .>. 12:17 ^2 0/0 .>. 18:23 ^3 0/0 .>. 24:29 ^4 0/0 .>. 30:35 ^5 0/0 .>. 36:41 ^0 0/0 .>. 42:47 ^1 0/0 .>. 48:53 ^2 0/0 .>. 54:59 ^3 0/0 .>. 60:65 ^4 0/0 .>. 66:71 ^5 0/0 .>. 72:77 ^0 0/0 .>. 78:83 ^1 0/0 .>. 84:89 ^2 0/0 .>. 90:95 ^3 0/0 .>. 96:101 ^4 0/0 .>. 102:107 ^5 0/0 .>. 108:113 ^0 0/0 .>. 114:119 ^1 0/0 .>. 120:125 ^2 0/0 .>. 126:127 ^3
rcu_bh:
c=-226 g=-226 s=1 jfq=-5701 j=72c7 nfqs=88/nfqsng=0(88) fqlh=0
0/1 0:127 ^0
0/3 0:35 ^0 0/0 36:71 ^1 0/0 72:107 ^2 0/0 108:127 ^3
0/3f 0:5 ^0 0/3 6:11 ^1 0/0 12:17 ^2 0/0 18:23 ^3 0/0 24:29 ^4 0/0 30:35 ^5 0/0 36:41 ^0 0/0 42:47 ^1 0/0 48:53 ^2 0/0 54:59 ^3 0/0 60:65 ^4 0/0 66:71 ^5 0/0 72:77 ^0 0/0 78:83 ^1 0/0 84:89 ^2 0/0 90:95 ^3 0/0 96:101 ^4 0/0 102:107 ^5 0/0 108:113 ^0 0/0 114:119 ^1 0/0 120:125 ^2 0/0 126:127 ^3
c=-226 g=-226 s=1 jfq=-5701 j=72c7 nfqs=88/nfqsng=0(88) fqlh=0 oqlen=0
0/1 .>. 0:127 ^0
0/3 .>. 0:35 ^0 0/0 .>. 36:71 ^1 0/0 .>. 72:107 ^2 0/0 .>. 108:127 ^3
0/3f .>. 0:5 ^0 0/3 .>. 6:11 ^1 0/0 .>. 12:17 ^2 0/0 .>. 18:23 ^3 0/0 .>. 24:29 ^4 0/0 .>. 30:35 ^5 0/0 .>. 36:41 ^0 0/0 .>. 42:47 ^1 0/0 .>. 48:53 ^2 0/0 .>. 54:59 ^3 0/0 .>. 60:65 ^4 0/0 .>. 66:71 ^5 0/0 .>. 72:77 ^0 0/0 .>. 78:83 ^1 0/0 .>. 84:89 ^2 0/0 .>. 90:95 ^3 0/0 .>. 96:101 ^4 0/0 .>. 102:107 ^5 0/0 .>. 108:113 ^0 0/0 .>. 114:119 ^1 0/0 .>. 120:125 ^2 0/0 .>. 126:127 ^3
This is once again split into "rcu" and "rcu_bh" portions. The fields are
as follows:
This is once again split into "rcu_sched" and "rcu_bh" portions,
and CONFIG_TREE_PREEMPT_RCU kernels will again have an additional
"rcu_preempt" section. The fields are as follows:
o "c" is exactly the same as "completed" under rcu/rcugp.
......@@ -372,6 +201,11 @@ o "fqlh" is the number of calls to force_quiescent_state() that
exited immediately (without even being counted in nfqs above)
due to contention on ->fqslock.
o "oqlen" is the number of callbacks on the "orphan" callback
list. RCU callbacks are placed on this list by CPUs going
offline, and are "adopted" either by the CPU helping the outgoing
CPU or by the next rcu_barrier*() call, whichever comes first.
o Each element of the form "1/1 0:127 ^0" represents one struct
rcu_node. Each line represents one level of the hierarchy, from
root to leaves. It is best to think of the rcu_data structures
......@@ -379,7 +213,7 @@ o Each element of the form "1/1 0:127 ^0" represents one struct
might be either one, two, or three levels of rcu_node structures,
depending on the relationship between CONFIG_RCU_FANOUT and
CONFIG_NR_CPUS.
o The numbers separated by the "/" are the qsmask followed
by the qsmaskinit. The qsmask will have one bit
set for each entity in the next lower level that
......@@ -389,10 +223,19 @@ o Each element of the form "1/1 0:127 ^0" represents one struct
The value of qsmaskinit is assigned to that of qsmask
at the beginning of each grace period.
For example, for "rcu", the qsmask of the first entry
of the lowest level is 0x14, meaning that we are still
waiting for CPUs 2 and 4 to check in for the current
grace period.
For example, for "rcu_sched", the qsmask of the first
entry of the lowest level is 0x14, meaning that we
are still waiting for CPUs 2 and 4 to check in for the
current grace period.
o The characters separated by the ">" indicate the state
of the blocked-tasks lists. A "T" preceding the ">"
indicates that at least one task blocked in an RCU
read-side critical section blocks the current grace
period, while a "." preceding the ">" indicates otherwise.
The character following the ">" indicates similarly for
the next grace period. A "T" should appear in this
field only for rcu-preempt.
o The numbers separated by the ":" are the range of CPUs
served by this struct rcu_node. This can be helpful
......@@ -431,8 +274,9 @@ rcu_bh:
6 np=120834 qsp=9902 cbr=0 cng=0 gpc=6 gps=3 nf=2 nn=110921
7 np=144888 qsp=26336 cbr=0 cng=0 gpc=8 gps=2 nf=0 nn=118542
As always, this is once again split into "rcu" and "rcu_bh" portions.
The fields are as follows:
As always, this is once again split into "rcu_sched" and "rcu_bh"
portions, with CONFIG_TREE_PREEMPT_RCU kernels having an additional
"rcu_preempt" section. The fields are as follows:
o "np" is the number of times that __rcu_pending() has been invoked
for the corresponding flavor of RCU.
......
......@@ -830,7 +830,7 @@ sched: Critical sections Grace period Barrier
SRCU: Critical sections Grace period Barrier
srcu_read_lock synchronize_srcu N/A
srcu_read_unlock
srcu_read_unlock synchronize_srcu_expedited
SRCU: Initialization/cleanup
init_srcu_struct
......
......@@ -65,6 +65,7 @@ aicdb.h*
asm-offsets.h
asm_offsets.h
autoconf.h*
av_permissions.h
bbootsect
bin2c
binkernel.spec
......@@ -95,12 +96,14 @@ docproc
elf2ecoff
elfconfig.h*
fixdep
flask.h
fore200e_mkfirm
fore200e_pca_fw.c*
gconf
gen-devlist
gen_crc32table
gen_init_cpio
genheaders
genksyms
*_gray256.c
ihex2fw
......
......@@ -85,7 +85,6 @@ parameter is applicable:
PPT Parallel port support is enabled.
PS2 Appropriate PS/2 support is enabled.
RAM RAM disk support is enabled.
ROOTPLUG The example Root Plug LSM is enabled.
S390 S390 architecture is enabled.
SCSI Appropriate SCSI support is enabled.
A lot of drivers have their options described inside of
......@@ -779,6 +778,13 @@ and is between 256 and 4096 characters. It is defined in the file
by the set_ftrace_notrace file in the debugfs
tracing directory.
ftrace_graph_filter=[function-list]
[FTRACE] Limit the top-level caller functions traced
by the function graph tracer at boot up.
function-list is a comma-separated list of functions
that can be changed at run time by the
set_graph_function file in the debugfs tracing directory.
gamecon.map[2|3]=
[HW,JOY] Multisystem joystick and NES/SNES/PSX pad
support via parallel port (up to 5 devices per port)
......@@ -2032,8 +2038,15 @@ and is between 256 and 4096 characters. It is defined in the file
print-fatal-signals=
[KNL] debug: print fatal signals
print-fatal-signals=1: print segfault info to
the kernel console.
If enabled, warn about various signal handling
related application anomalies: too many signals,
too many POSIX.1 timers, fatal signals causing a
coredump, etc.
If you hit the warning due to signal overflow,
you might want to try "ulimit -i unlimited".
default: off.
printk.time= Show timing data prefixed to each printk message line
......@@ -2164,15 +2177,6 @@ and is between 256 and 4096 characters. It is defined in the file
Useful for devices that are detected asynchronously
(e.g. USB and MMC devices).
root_plug.vendor_id=
[ROOTPLUG] Override the default vendor ID
root_plug.product_id=
[ROOTPLUG] Override the default product ID
root_plug.debug=
[ROOTPLUG] Enable debugging output
rw [KNL] Mount root device read-write on boot
S [KNL] Run init in single mode
......
This file details changes in 2.6 which affect PCMCIA card driver authors:
* no cs_error / CS_CHECK / CONFIG_PCMCIA_DEBUG (as of 2.6.33)
Instead of the cs_error() callback or the CS_CHECK() macro, please use
Linux-style checking of return values, and -- if necessary -- debug
messages using "dev_dbg()" or "pr_debug()".
* New CIS tuple access (as of 2.6.33)
Instead of pcmcia_get_{first,next}_tuple(), pcmcia_get_tuple_data() and
pcmcia_parse_tuple(), a driver shall use "pcmcia_get_tuple()" if it is
only interested in one (raw) tuple, or "pcmcia_loop_tuple()" if it is
interested in all tuples of one type. To decode the MAC from CISTPL_FUNCE,
a new helper "pcmcia_get_mac_from_cis()" was added.
* New configuration loop helper (as of 2.6.28)
By calling pcmcia_loop_config(), a driver can iterate over all available
configuration options. During a driver's probe() phase, one doesn't need
......
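A sketch of the pcmcia_loop_config() pattern, using the 2.6.33-era
prototypes; the example_ names and the acceptance test itself are
made up for illustration:

static int example_conf_check(struct pcmcia_device *p_dev,
			      cistpl_cftable_entry_t *cfg,
			      cistpl_cftable_entry_t *dflt,
			      unsigned int vcc, void *priv_data)
{
	/* accept the first configuration option with an I/O window */
	if (cfg->io.nwin == 0)
		return -ENODEV;
	p_dev->io.BasePort1 = cfg->io.win[0].base;
	p_dev->io.NumPorts1 = cfg->io.win[0].len;
	return pcmcia_request_io(p_dev, &p_dev->io);
}

static int example_probe(struct pcmcia_device *link)
{
	/* iterates over all options until conf_check returns 0 */
	return pcmcia_loop_config(link, example_conf_check, NULL);
}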
......@@ -213,10 +213,19 @@ If you can't trace NMI functions, then skip this option.
<details to be filled>
HAVE_FTRACE_SYSCALLS
HAVE_SYSCALL_TRACEPOINTS
---------------------
<details to be filled>
You need very few things to get syscall tracing working on an arch;
a sketch of the typical arch hook follows this list.
- Have a NR_syscalls variable in <asm/unistd.h> that provides the number
of syscalls supported by the arch.
- Implement arch_syscall_addr() that resolves a syscall address from a
syscall number.
- Support the TIF_SYSCALL_TRACEPOINT thread flag.
- Put the trace_sys_enter() and trace_sys_exit() tracepoint calls
in the ptrace syscalls tracing path.
- Tag this arch as HAVE_SYSCALL_TRACEPOINTS.
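The typical arch_syscall_addr() is a one-liner; this mirrors the s390
and x86 versions appearing later in this merge (the exact type of
sys_call_table varies by architecture):

extern const unsigned long sys_call_table[];

unsigned long __init arch_syscall_addr(int nr)
{
	return (unsigned long)sys_call_table[nr];
}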
HAVE_FTRACE_MCOUNT_RECORD
......
......@@ -3023,11 +3023,8 @@ S: Maintained
F: fs/autofs4/
KERNEL BUILD
M: Sam Ravnborg <sam@ravnborg.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/sam/kbuild-next.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/sam/kbuild-fixes.git
L: linux-kbuild@vger.kernel.org
S: Maintained
S: Orphan
F: Documentation/kbuild/
F: Makefile
F: scripts/Makefile.*
......
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 32
EXTRAVERSION = -rc8
EXTRAVERSION =
NAME = Man-Eating Seals of Antiquity
# *DOCUMENTATION*
......@@ -379,6 +379,7 @@ export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn --exc
PHONY += scripts_basic
scripts_basic:
$(Q)$(MAKE) $(build)=scripts/basic
$(Q)rm -f .tmp_quiet_recordmcount
# To avoid any implicit rule to kick in, define an empty command.
scripts/basic/%: scripts_basic ;
......
......@@ -52,7 +52,7 @@
#define BUG() \
do { \
_BUG_OR_WARN(0); \
for (;;); \
unreachable(); \
} while (0)
#define WARN_ON(condition) \
......
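The unreachable() used above amounts to the following, an assumption
based on the include/linux/compiler*.h of this era: a real optimizer
hint on gcc 4.5 and later, an endless loop otherwise, so BUG() still
never returns.

#if __GNUC__ == 4 && __GNUC_MINOR__ >= 5
# define unreachable() __builtin_unreachable()
#else
# define unreachable() do { } while (1)
#endif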
......@@ -4,8 +4,6 @@
#include <linux/dma-mapping.h>
#include <linux/swiotlb.h>
extern int swiotlb_force;
#ifdef CONFIG_SWIOTLB
extern int swiotlb;
extern void pci_swiotlb_init(void);
......
......@@ -41,7 +41,7 @@ struct dma_map_ops swiotlb_dma_ops = {
void __init swiotlb_dma_init(void)
{
dma_ops = &swiotlb_dma_ops;
swiotlb_init();
swiotlb_init(1);
}
void __init pci_swiotlb_init(void)
......@@ -51,7 +51,7 @@ void __init pci_swiotlb_init(void)
swiotlb = 1;
printk(KERN_INFO "PCI-DMA: Re-initialize machine vector.\n");
machvec_init("dig");
swiotlb_init();
swiotlb_init(1);
dma_ops = &swiotlb_dma_ops;
#else
panic("Unable to find Intel IOMMU");
......
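The new argument to swiotlb_init() (an assumption based on the
converted callers in this series) merely selects verbose bring-up
printing, so call sites pass 1 to keep the old behavior:

extern void __init swiotlb_init(int verbose);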
......@@ -11,9 +11,7 @@
static inline void __noreturn BUG(void)
{
__asm__ __volatile__("break %0" : : "i" (BRK_BUG));
/* Fool GCC into thinking the function doesn't return. */
while (1)
;
unreachable();
}
#define HAVE_ARCH_BUG
......
......@@ -306,6 +306,7 @@ static inline int mips_atomic_set(struct pt_regs *regs,
if (cpu_has_llsc && R10000_LLSC_WAR) {
__asm__ __volatile__ (
" .set mips3 \n"
" li %[err], 0 \n"
"1: ll %[old], (%[addr]) \n"
" move %[tmp], %[new] \n"
......@@ -320,6 +321,7 @@ static inline int mips_atomic_set(struct pt_regs *regs,
" "STR(PTR)" 1b, 4b \n"
" "STR(PTR)" 2b, 4b \n"
" .previous \n"
" .set mips0 \n"
: [old] "=&r" (old),
[err] "=&r" (err),
[tmp] "=&r" (tmp)
......@@ -329,6 +331,7 @@ static inline int mips_atomic_set(struct pt_regs *regs,
: "memory");
} else if (cpu_has_llsc) {
__asm__ __volatile__ (
" .set mips3 \n"
" li %[err], 0 \n"
"1: ll %[old], (%[addr]) \n"
" move %[tmp], %[new] \n"
......@@ -347,6 +350,7 @@ static inline int mips_atomic_set(struct pt_regs *regs,
" "STR(PTR)" 1b, 5b \n"
" "STR(PTR)" 2b, 5b \n"
" .previous \n"
" .set mips0 \n"
: [old] "=&r" (old),
[err] "=&r" (err),
[tmp] "=&r" (tmp)
......
......@@ -110,7 +110,6 @@ static struct korina_device korina_dev0_data = {
static struct platform_device korina_dev0 = {
.id = -1,
.name = "korina",
.dev.driver_data = &korina_dev0_data,
.resource = korina_dev0_res,
.num_resources = ARRAY_SIZE(korina_dev0_res),
};
......@@ -332,6 +331,8 @@ static int __init plat_setup_devices(void)
/* set the uart clock to the current cpu frequency */
rb532_uart_res[0].uartclk = idt_cpu_freq;
dev_set_drvdata(&korina_dev0.dev, &korina_dev0_data);
return platform_add_devices(rb532_devs, ARRAY_SIZE(rb532_devs));
}
......
......@@ -345,7 +345,7 @@ void __init setup_arch(char **cmdline_p)
#ifdef CONFIG_SWIOTLB
if (ppc_swiotlb_enable)
swiotlb_init();
swiotlb_init(1);
#endif
paging_init();
......
......@@ -550,7 +550,7 @@ void __init setup_arch(char **cmdline_p)
#ifdef CONFIG_SWIOTLB
if (ppc_swiotlb_enable)
swiotlb_init();
swiotlb_init(1);
#endif
paging_init();
......
......@@ -95,6 +95,34 @@ config S390
select HAVE_ARCH_TRACEHOOK
select INIT_ALL_POSSIBLE
select HAVE_PERF_EVENTS
select ARCH_INLINE_SPIN_TRYLOCK
select ARCH_INLINE_SPIN_TRYLOCK_BH
select ARCH_INLINE_SPIN_LOCK
select ARCH_INLINE_SPIN_LOCK_BH
select ARCH_INLINE_SPIN_LOCK_IRQ
select ARCH_INLINE_SPIN_LOCK_IRQSAVE
select ARCH_INLINE_SPIN_UNLOCK
select ARCH_INLINE_SPIN_UNLOCK_BH
select ARCH_INLINE_SPIN_UNLOCK_IRQ
select ARCH_INLINE_SPIN_UNLOCK_IRQRESTORE
select ARCH_INLINE_READ_TRYLOCK
select ARCH_INLINE_READ_LOCK
select ARCH_INLINE_READ_LOCK_BH
select ARCH_INLINE_READ_LOCK_IRQ
select ARCH_INLINE_READ_LOCK_IRQSAVE
select ARCH_INLINE_READ_UNLOCK
select ARCH_INLINE_READ_UNLOCK_BH
select ARCH_INLINE_READ_UNLOCK_IRQ
select ARCH_INLINE_READ_UNLOCK_IRQRESTORE
select ARCH_INLINE_WRITE_TRYLOCK
select ARCH_INLINE_WRITE_LOCK
select ARCH_INLINE_WRITE_LOCK_BH
select ARCH_INLINE_WRITE_LOCK_IRQ
select ARCH_INLINE_WRITE_LOCK_IRQSAVE
select ARCH_INLINE_WRITE_UNLOCK
select ARCH_INLINE_WRITE_UNLOCK_BH
select ARCH_INLINE_WRITE_UNLOCK_IRQ
select ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE
config SCHED_OMIT_FRAME_POINTER
bool
......
......@@ -49,7 +49,7 @@
#define BUG() do { \
__EMIT_BUG(0); \
for (;;); \
unreachable(); \
} while (0)
#define WARN_ON(x) ({ \
......
......@@ -191,33 +191,4 @@ static inline int __raw_write_trylock(raw_rwlock_t *rw)
#define _raw_read_relax(lock) cpu_relax()
#define _raw_write_relax(lock) cpu_relax()
#define __always_inline__spin_lock
#define __always_inline__read_lock
#define __always_inline__write_lock
#define __always_inline__spin_lock_bh
#define __always_inline__read_lock_bh
#define __always_inline__write_lock_bh
#define __always_inline__spin_lock_irq
#define __always_inline__read_lock_irq
#define __always_inline__write_lock_irq
#define __always_inline__spin_lock_irqsave
#define __always_inline__read_lock_irqsave
#define __always_inline__write_lock_irqsave
#define __always_inline__spin_trylock
#define __always_inline__read_trylock
#define __always_inline__write_trylock
#define __always_inline__spin_trylock_bh
#define __always_inline__spin_unlock
#define __always_inline__read_unlock
#define __always_inline__write_unlock
#define __always_inline__spin_unlock_bh
#define __always_inline__read_unlock_bh
#define __always_inline__write_unlock_bh
#define __always_inline__spin_unlock_irq
#define __always_inline__read_unlock_irq
#define __always_inline__write_unlock_irq
#define __always_inline__spin_unlock_irqrestore
#define __always_inline__read_unlock_irqrestore
#define __always_inline__write_unlock_irqrestore
#endif /* __ASM_SPINLOCK_H */
......@@ -203,73 +203,10 @@ unsigned long prepare_ftrace_return(unsigned long ip, unsigned long parent)
#ifdef CONFIG_FTRACE_SYSCALLS
extern unsigned long __start_syscalls_metadata[];
extern unsigned long __stop_syscalls_metadata[];
extern unsigned int sys_call_table[];
static struct syscall_metadata **syscalls_metadata;
struct syscall_metadata *syscall_nr_to_meta(int nr)
{
if (!syscalls_metadata || nr >= NR_syscalls || nr < 0)
return NULL;
return syscalls_metadata[nr];
}
int syscall_name_to_nr(char *name)
{
int i;
if (!syscalls_metadata)
return -1;
for (i = 0; i < NR_syscalls; i++)
if (syscalls_metadata[i])
if (!strcmp(syscalls_metadata[i]->name, name))
return i;
return -1;
}
void set_syscall_enter_id(int num, int id)
{
syscalls_metadata[num]->enter_id = id;
}
void set_syscall_exit_id(int num, int id)
unsigned long __init arch_syscall_addr(int nr)
{
syscalls_metadata[num]->exit_id = id;
}
static struct syscall_metadata *find_syscall_meta(unsigned long syscall)
{
struct syscall_metadata *start;
struct syscall_metadata *stop;
char str[KSYM_SYMBOL_LEN];
start = (struct syscall_metadata *)__start_syscalls_metadata;
stop = (struct syscall_metadata *)__stop_syscalls_metadata;
kallsyms_lookup(syscall, NULL, NULL, NULL, str);
for ( ; start < stop; start++) {
if (start->name && !strcmp(start->name + 3, str + 3))
return start;
}
return NULL;
}
static int __init arch_init_ftrace_syscalls(void)
{
struct syscall_metadata *meta;
int i;
syscalls_metadata = kzalloc(sizeof(*syscalls_metadata) * NR_syscalls,
GFP_KERNEL);
if (!syscalls_metadata)
return -ENOMEM;
for (i = 0; i < NR_syscalls; i++) {
meta = find_syscall_meta((unsigned long)sys_call_table[i]);
syscalls_metadata[i] = meta;
}
return 0;
return (unsigned long)sys_call_table[nr];
}
arch_initcall(arch_init_ftrace_syscalls);
#endif
/*
* Copyright (C) 2007-2008 Advanced Micro Devices, Inc.
* Copyright (C) 2007-2009 Advanced Micro Devices, Inc.
* Author: Joerg Roedel <joerg.roedel@amd.com>
* Leo Duran <leo.duran@amd.com>
*
......@@ -23,19 +23,13 @@
#include <linux/irqreturn.h>
#ifdef CONFIG_AMD_IOMMU
extern int amd_iommu_init(void);
extern int amd_iommu_init_dma_ops(void);
extern int amd_iommu_init_passthrough(void);
extern void amd_iommu_detect(void);
extern irqreturn_t amd_iommu_int_handler(int irq, void *data);
extern void amd_iommu_flush_all_domains(void);
extern void amd_iommu_flush_all_devices(void);
extern void amd_iommu_shutdown(void);
extern void amd_iommu_apply_erratum_63(u16 devid);
#else
static inline int amd_iommu_init(void) { return -ENODEV; }
static inline void amd_iommu_detect(void) { }
static inline void amd_iommu_shutdown(void) { }
#endif
#endif /* _ASM_X86_AMD_IOMMU_H */
/*
* Copyright (C) 2009 Advanced Micro Devices, Inc.
* Author: Joerg Roedel <joerg.roedel@amd.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#ifndef _ASM_X86_AMD_IOMMU_PROTO_H
#define _ASM_X86_AMD_IOMMU_PROTO_H
struct amd_iommu;
extern int amd_iommu_init_dma_ops(void);
extern int amd_iommu_init_passthrough(void);
extern irqreturn_t amd_iommu_int_handler(int irq, void *data);
extern void amd_iommu_flush_all_domains(void);
extern void amd_iommu_flush_all_devices(void);
extern void amd_iommu_apply_erratum_63(u16 devid);
extern void amd_iommu_reset_cmd_buffer(struct amd_iommu *iommu);
#ifndef CONFIG_AMD_IOMMU_STATS
static inline void amd_iommu_stats_init(void) { }
#endif /* !CONFIG_AMD_IOMMU_STATS */
#endif /* _ASM_X86_AMD_IOMMU_PROTO_H */
/*
* Copyright (C) 2007-2008 Advanced Micro Devices, Inc.
* Copyright (C) 2007-2009 Advanced Micro Devices, Inc.
* Author: Joerg Roedel <joerg.roedel@amd.com>
* Leo Duran <leo.duran@amd.com>
*
......@@ -24,6 +24,11 @@
#include <linux/list.h>
#include <linux/spinlock.h>
/*
* Maximum number of IOMMUs supported
*/
#define MAX_IOMMUS 32
/*
* some size calculation constants
*/
......@@ -206,6 +211,9 @@ extern bool amd_iommu_dump;
printk(KERN_INFO "AMD-Vi: " format, ## arg); \
} while(0);
/* global flag if IOMMUs cache non-present entries */
extern bool amd_iommu_np_cache;
/*
* Make iterating over all IOMMUs easier
*/
......@@ -226,6 +234,8 @@ extern bool amd_iommu_dump;
* independent of their use.
*/
struct protection_domain {
struct list_head list; /* for list of all protection domains */
struct list_head dev_list; /* List of all devices in this domain */
spinlock_t lock; /* mostly used to lock the page table*/
u16 id; /* the domain id written to the device table */
int mode; /* paging mode (0-6 levels) */
......@@ -233,7 +243,20 @@ struct protection_domain {
unsigned long flags; /* flags to find out type of domain */
bool updated; /* complete domain flush required */
unsigned dev_cnt; /* devices assigned to this domain */
unsigned dev_iommu[MAX_IOMMUS]; /* per-IOMMU reference count */
void *priv; /* private data */
};
/*
* This struct contains device specific data for the IOMMU
*/
struct iommu_dev_data {
struct list_head list; /* For domain->dev_list */
struct device *dev; /* Device this data belongs to */
struct device *alias; /* The Alias Device */
struct protection_domain *domain; /* Domain the device is bound to */
atomic_t bind; /* Domain attach reference count */
};
/*
......@@ -291,6 +314,9 @@ struct dma_ops_domain {
struct amd_iommu {
struct list_head list;
/* Index within the IOMMU array */
int index;
/* locks the accesses to the hardware */
spinlock_t lock;
......@@ -356,6 +382,21 @@ struct amd_iommu {
*/
extern struct list_head amd_iommu_list;
/*
* Array with pointers to each IOMMU struct
* The indices are referenced in the protection domains
*/
extern struct amd_iommu *amd_iommus[MAX_IOMMUS];
/* Number of IOMMUs present in the system */
extern int amd_iommus_present;
/*
* Declarations for the global list of all protection domains
*/
extern spinlock_t amd_iommu_pd_lock;
extern struct list_head amd_iommu_pd_list;
/*
* Structure defining one entry in the device table
*/
......@@ -416,15 +457,9 @@ extern unsigned amd_iommu_aperture_order;
/* largest PCI device id we expect translation requests for */
extern u16 amd_iommu_last_bdf;
/* data structures for protection domain handling */
extern struct protection_domain **amd_iommu_pd_table;
/* allocation bitmap for domain ids */
extern unsigned long *amd_iommu_pd_alloc_bitmap;
/* will be 1 if device isolation is enabled */
extern bool amd_iommu_isolate;
/*
* If true, the addresses will be flushed on unmap time, not when
* they are reused
......@@ -462,11 +497,6 @@ struct __iommu_counter {
#define ADD_STATS_COUNTER(name, x)
#define SUB_STATS_COUNTER(name, x)
static inline void amd_iommu_stats_init(void) { }
#endif /* CONFIG_AMD_IOMMU_STATS */
/* some function prototypes */
extern void amd_iommu_reset_cmd_buffer(struct amd_iommu *iommu);
#endif /* _ASM_X86_AMD_IOMMU_TYPES_H */
......@@ -22,14 +22,14 @@ do { \
".popsection" \
: : "i" (__FILE__), "i" (__LINE__), \
"i" (sizeof(struct bug_entry))); \
for (;;) ; \
unreachable(); \
} while (0)
#else
#define BUG() \
do { \
asm volatile("ud2"); \
for (;;) ; \
unreachable(); \
} while (0)
#endif
......
......@@ -62,10 +62,8 @@ struct cal_chipset_ops {
extern int use_calgary;
#ifdef CONFIG_CALGARY_IOMMU
extern int calgary_iommu_init(void);
extern void detect_calgary(void);
#else
static inline int calgary_iommu_init(void) { return 1; }
static inline void detect_calgary(void) { return; }
#endif
......
......@@ -8,7 +8,7 @@ struct dev_archdata {
#ifdef CONFIG_X86_64
struct dma_map_ops *dma_ops;
#endif
#ifdef CONFIG_DMAR
#if defined(CONFIG_DMAR) || defined(CONFIG_AMD_IOMMU)
void *iommu; /* hook for IOMMU specific extension */
#endif
};
......
......@@ -20,7 +20,8 @@
# define ISA_DMA_BIT_MASK DMA_BIT_MASK(32)
#endif
extern dma_addr_t bad_dma_address;
#define DMA_ERROR_CODE 0
extern int iommu_merge;
extern struct device x86_dma_fallback_dev;
extern int panic_on_overflow;
......@@ -48,7 +49,7 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
if (ops->mapping_error)
return ops->mapping_error(dev, dma_addr);
return (dma_addr == bad_dma_address);
return (dma_addr == DMA_ERROR_CODE);
}
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
......
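Driver-side code is unaffected as long as it goes through
dma_mapping_error(); a minimal sketch of the usual pattern
(example_map is a hypothetical helper):

static int example_map(struct device *dev, void *buf, size_t size,
		       dma_addr_t *handle)
{
	*handle = dma_map_single(dev, buf, size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, *handle))	/* DMA_ERROR_CODE on x86 now */
		return -ENOMEM;
	return 0;
}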
......@@ -35,8 +35,7 @@ extern int gart_iommu_aperture_allowed;
extern int gart_iommu_aperture_disabled;
extern void early_gart_iommu_check(void);
extern void gart_iommu_init(void);
extern void gart_iommu_shutdown(void);
extern int gart_iommu_init(void);
extern void __init gart_parse_options(char *);
extern void gart_iommu_hole_init(void);
......@@ -48,12 +47,6 @@ extern void gart_iommu_hole_init(void);
static inline void early_gart_iommu_check(void)
{
}
static inline void gart_iommu_init(void)
{
}
static inline void gart_iommu_shutdown(void)
{
}
static inline void gart_parse_options(char *options)
{
}
......
#ifndef _ASM_X86_IOMMU_H
#define _ASM_X86_IOMMU_H
extern void pci_iommu_shutdown(void);
extern void no_iommu_init(void);
extern struct dma_map_ops nommu_dma_ops;
extern int force_iommu, no_iommu;
extern int iommu_detected;
......
......@@ -3,17 +3,14 @@
#include <linux/swiotlb.h>
/* SWIOTLB interface */
extern int swiotlb_force;
#ifdef CONFIG_SWIOTLB
extern int swiotlb;
extern void pci_swiotlb_init(void);
extern int pci_swiotlb_init(void);
#else
#define swiotlb 0
static inline void pci_swiotlb_init(void)
static inline int pci_swiotlb_init(void)
{
return 0;
}
#endif
......
......@@ -90,6 +90,14 @@ struct x86_init_timers {
void (*timer_init)(void);
};
/**
* struct x86_init_iommu - platform specific iommu setup
* @iommu_init: platform specific iommu setup
*/
struct x86_init_iommu {
int (*iommu_init)(void);
};
/**
* struct x86_init_ops - functions for platform specific setup
*
......@@ -101,6 +109,7 @@ struct x86_init_ops {
struct x86_init_oem oem;
struct x86_init_paging paging;
struct x86_init_timers timers;
struct x86_init_iommu iommu;
};
/**
......@@ -121,6 +130,7 @@ struct x86_platform_ops {
unsigned long (*calibrate_tsc)(void);
unsigned long (*get_wallclock)(void);
int (*set_wallclock)(unsigned long nowtime);
void (*iommu_shutdown)(void);
};
extern struct x86_init_ops x86_init;
......
This diff is collapsed.
/*
* Copyright (C) 2007-2008 Advanced Micro Devices, Inc.
* Copyright (C) 2007-2009 Advanced Micro Devices, Inc.
* Author: Joerg Roedel <joerg.roedel@amd.com>
* Leo Duran <leo.duran@amd.com>
*
......@@ -25,10 +25,12 @@
#include <linux/interrupt.h>
#include <linux/msi.h>
#include <asm/pci-direct.h>
#include <asm/amd_iommu_proto.h>
#include <asm/amd_iommu_types.h>
#include <asm/amd_iommu.h>
#include <asm/iommu.h>
#include <asm/gart.h>
#include <asm/x86_init.h>
/*
* definitions for the ACPI scanning code
......@@ -123,18 +125,24 @@ u16 amd_iommu_last_bdf; /* largest PCI device id we have
to handle */
LIST_HEAD(amd_iommu_unity_map); /* a list of required unity mappings
we find in ACPI */
#ifdef CONFIG_IOMMU_STRESS
bool amd_iommu_isolate = false;
#else
bool amd_iommu_isolate = true; /* if true, device isolation is
enabled */
#endif
bool amd_iommu_unmap_flush; /* if true, flush on every unmap */
LIST_HEAD(amd_iommu_list); /* list of all AMD IOMMUs in the
system */
/* Array to assign indices to IOMMUs*/
struct amd_iommu *amd_iommus[MAX_IOMMUS];
int amd_iommus_present;
/* IOMMUs have a non-present cache? */
bool amd_iommu_np_cache __read_mostly;
/*
* List of protection domains - used during resume
*/
LIST_HEAD(amd_iommu_pd_list);
spinlock_t amd_iommu_pd_lock;
/*
* Pointer to the device table which is shared by all AMD IOMMUs
* it is indexed by the PCI device id or the HT unit id and contains
......@@ -156,12 +164,6 @@ u16 *amd_iommu_alias_table;
*/
struct amd_iommu **amd_iommu_rlookup_table;
/*
* The pd table (protection domain table) is used to find the protection domain
* data structure a device belongs to. Indexed with the PCI device id too.
*/
struct protection_domain **amd_iommu_pd_table;
/*
* AMD IOMMU allows up to 2^16 different protection domains. This is a bitmap
* to know which ones are already in use.
......@@ -838,7 +840,18 @@ static void __init free_iommu_all(void)
static int __init init_iommu_one(struct amd_iommu *iommu, struct ivhd_header *h)
{
spin_lock_init(&iommu->lock);
/* Add IOMMU to internal data structures */
list_add_tail(&iommu->list, &amd_iommu_list);
iommu->index = amd_iommus_present++;
if (unlikely(iommu->index >= MAX_IOMMUS)) {
WARN(1, "AMD-Vi: System has more IOMMUs than supported by this driver\n");
return -ENOSYS;
}
/* Index is fine - add IOMMU to the array */
amd_iommus[iommu->index] = iommu;
/*
* Copy data from ACPI table entry to the iommu struct
......@@ -868,6 +881,9 @@ static int __init init_iommu_one(struct amd_iommu *iommu, struct ivhd_header *h)
init_iommu_from_acpi(iommu, h);
init_iommu_devices(iommu);
if (iommu->cap & (1UL << IOMMU_CAP_NPCACHE))
amd_iommu_np_cache = true;
return pci_enable_device(iommu->dev);
}
......@@ -925,7 +941,7 @@ static int __init init_iommu_all(struct acpi_table_header *table)
*
****************************************************************************/
static int __init iommu_setup_msi(struct amd_iommu *iommu)
static int iommu_setup_msi(struct amd_iommu *iommu)
{
int r;
......@@ -1176,19 +1192,10 @@ static struct sys_device device_amd_iommu = {
* functions. Finally it prints some information about AMD IOMMUs and
* the driver state and enables the hardware.
*/
int __init amd_iommu_init(void)
static int __init amd_iommu_init(void)
{
int i, ret = 0;
if (no_iommu) {
printk(KERN_INFO "AMD-Vi disabled by kernel command line\n");
return 0;
}
if (!amd_iommu_detected)
return -ENODEV;
/*
* First parse ACPI tables to find the largest Bus/Dev/Func
* we need to handle. Upon this information the shared data
......@@ -1225,15 +1232,6 @@ int __init amd_iommu_init(void)
if (amd_iommu_rlookup_table == NULL)
goto free;
/*
* Protection Domain table - maps devices to protection domains
* This table has the same size as the rlookup_table
*/
amd_iommu_pd_table = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
get_order(rlookup_table_size));
if (amd_iommu_pd_table == NULL)
goto free;
amd_iommu_pd_alloc_bitmap = (void *)__get_free_pages(
GFP_KERNEL | __GFP_ZERO,
get_order(MAX_DOMAIN_ID/8));
......@@ -1255,6 +1253,8 @@ int __init amd_iommu_init(void)
*/
amd_iommu_pd_alloc_bitmap[0] = 1;
spin_lock_init(&amd_iommu_pd_lock);
/*
* now the data structures are allocated and basically initialized
* start the real acpi table scan
......@@ -1286,17 +1286,12 @@ int __init amd_iommu_init(void)
if (iommu_pass_through)
goto out;
printk(KERN_INFO "AMD-Vi: device isolation ");
if (amd_iommu_isolate)
printk("enabled\n");
else
printk("disabled\n");
if (amd_iommu_unmap_flush)
printk(KERN_INFO "AMD-Vi: IO/TLB flush on unmap enabled\n");
else
printk(KERN_INFO "AMD-Vi: Lazy IO/TLB flushing enabled\n");
x86_platform.iommu_shutdown = disable_iommus;
out:
return ret;
......@@ -1304,9 +1299,6 @@ int __init amd_iommu_init(void)
free_pages((unsigned long)amd_iommu_pd_alloc_bitmap,
get_order(MAX_DOMAIN_ID/8));
free_pages((unsigned long)amd_iommu_pd_table,
get_order(rlookup_table_size));
free_pages((unsigned long)amd_iommu_rlookup_table,
get_order(rlookup_table_size));
......@@ -1323,11 +1315,6 @@ int __init amd_iommu_init(void)
goto out;
}
void amd_iommu_shutdown(void)
{
disable_iommus();
}
/****************************************************************************
*
* Early detect code. This code runs at IOMMU detection time in the DMA
......@@ -1342,16 +1329,13 @@ static int __init early_amd_iommu_detect(struct acpi_table_header *table)
void __init amd_iommu_detect(void)
{
if (swiotlb || no_iommu || (iommu_detected && !gart_iommu_aperture))
if (no_iommu || (iommu_detected && !gart_iommu_aperture))
return;
if (acpi_table_parse("IVRS", early_amd_iommu_detect) == 0) {
iommu_detected = 1;
amd_iommu_detected = 1;
#ifdef CONFIG_GART_IOMMU
gart_iommu_aperture_disabled = 1;
gart_iommu_aperture = 0;
#endif
x86_init.iommu.iommu_init = amd_iommu_init;
}
}
......@@ -1372,10 +1356,6 @@ static int __init parse_amd_iommu_dump(char *str)
static int __init parse_amd_iommu_options(char *str)
{
for (; *str; ++str) {
if (strncmp(str, "isolate", 7) == 0)
amd_iommu_isolate = true;
if (strncmp(str, "share", 5) == 0)
amd_iommu_isolate = false;
if (strncmp(str, "fullflush", 9) == 0)
amd_iommu_unmap_flush = true;
}
......
......@@ -28,6 +28,7 @@
#include <asm/pci-direct.h>
#include <asm/dma.h>
#include <asm/k8.h>
#include <asm/x86_init.h>
int gart_iommu_aperture;
int gart_iommu_aperture_disabled __initdata;
......@@ -400,6 +401,7 @@ void __init gart_iommu_hole_init(void)
iommu_detected = 1;
gart_iommu_aperture = 1;
x86_init.iommu.iommu_init = gart_iommu_init;
aper_order = (read_pci_config(bus, slot, 3, AMD64_GARTAPERTURECTL) >> 1) & 7;
aper_size = (32 * 1024 * 1024) << aper_order;
......@@ -456,7 +458,7 @@ void __init gart_iommu_hole_init(void)
if (aper_alloc) {
/* Got the aperture from the AGP bridge */
} else if (swiotlb && !valid_agp) {
} else if (!valid_agp) {
/* Do nothing */
} else if ((!no_iommu && max_pfn > MAX_DMA32_PFN) ||
force_iommu ||
......
......@@ -27,8 +27,7 @@
#include <asm/cpu.h>
#include <asm/reboot.h>
#include <asm/virtext.h>
#include <asm/iommu.h>
#include <asm/x86_init.h>
#if defined(CONFIG_SMP) && defined(CONFIG_X86_LOCAL_APIC)
......@@ -106,7 +105,7 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
#endif
#ifdef CONFIG_X86_64
pci_iommu_shutdown();
x86_platform.iommu_shutdown();
#endif
crash_save_cpu(regs, safe_smp_processor_id());
......
......@@ -1185,17 +1185,14 @@ END(ftrace_graph_caller)
.globl return_to_handler
return_to_handler:
pushl $0
pushl %eax
pushl %ecx
pushl %edx
movl %ebp, %eax
call ftrace_return_to_handler
movl %eax, 0xc(%esp)
movl %eax, %ecx
popl %edx
popl %ecx
popl %eax
ret
jmp *%ecx
#endif
.section .rodata,"a"
......
......@@ -155,11 +155,11 @@ GLOBAL(return_to_handler)
call ftrace_return_to_handler
movq %rax, 16(%rsp)
movq %rax, %rdi
movq 8(%rsp), %rdx
movq (%rsp), %rax
addq $16, %rsp
retq
addq $24, %rsp
jmp *%rdi
#endif
......
......@@ -9,6 +9,8 @@
* the dangers of modifying code on the run.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/spinlock.h>
#include <linux/hardirq.h>
#include <linux/uaccess.h>
......@@ -336,15 +338,15 @@ int __init ftrace_dyn_arch_init(void *data)
switch (faulted) {
case 0:
pr_info("ftrace: converting mcount calls to 0f 1f 44 00 00\n");
pr_info("converting mcount calls to 0f 1f 44 00 00\n");
memcpy(ftrace_nop, ftrace_test_p6nop, MCOUNT_INSN_SIZE);
break;
case 1:
pr_info("ftrace: converting mcount calls to 66 66 66 66 90\n");
pr_info("converting mcount calls to 66 66 66 66 90\n");
memcpy(ftrace_nop, ftrace_test_nop5, MCOUNT_INSN_SIZE);
break;
case 2:
pr_info("ftrace: converting mcount calls to jmp . + 5\n");
pr_info("converting mcount calls to jmp . + 5\n");
memcpy(ftrace_nop, ftrace_test_jmp, MCOUNT_INSN_SIZE);
break;
}
......@@ -468,82 +470,10 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
#ifdef CONFIG_FTRACE_SYSCALLS
extern unsigned long __start_syscalls_metadata[];
extern unsigned long __stop_syscalls_metadata[];
extern unsigned long *sys_call_table;
static struct syscall_metadata **syscalls_metadata;
static struct syscall_metadata *find_syscall_meta(unsigned long *syscall)
{
struct syscall_metadata *start;
struct syscall_metadata *stop;
char str[KSYM_SYMBOL_LEN];
start = (struct syscall_metadata *)__start_syscalls_metadata;
stop = (struct syscall_metadata *)__stop_syscalls_metadata;
kallsyms_lookup((unsigned long) syscall, NULL, NULL, NULL, str);
for ( ; start < stop; start++) {
if (start->name && !strcmp(start->name, str))
return start;
}
return NULL;
}
struct syscall_metadata *syscall_nr_to_meta(int nr)
{
if (!syscalls_metadata || nr >= NR_syscalls || nr < 0)
return NULL;
return syscalls_metadata[nr];
}
int syscall_name_to_nr(char *name)
unsigned long __init arch_syscall_addr(int nr)
{
int i;
if (!syscalls_metadata)
return -1;
for (i = 0; i < NR_syscalls; i++) {
if (syscalls_metadata[i]) {
if (!strcmp(syscalls_metadata[i]->name, name))
return i;
}
}
return -1;
}
void set_syscall_enter_id(int num, int id)
{
syscalls_metadata[num]->enter_id = id;
}
void set_syscall_exit_id(int num, int id)
{
syscalls_metadata[num]->exit_id = id;
}
static int __init arch_init_ftrace_syscalls(void)
{
int i;
struct syscall_metadata *meta;
unsigned long **psys_syscall_table = &sys_call_table;
syscalls_metadata = kzalloc(sizeof(*syscalls_metadata) *
NR_syscalls, GFP_KERNEL);
if (!syscalls_metadata) {
WARN_ON(1);
return -ENOMEM;
}
for (i = 0; i < NR_syscalls; i++) {
meta = find_syscall_meta(psys_syscall_table[i]);
syscalls_metadata[i] = meta;
}
return 0;
return (unsigned long)(&sys_call_table)[nr];
}
arch_initcall(arch_init_ftrace_syscalls);
#endif
......@@ -46,6 +46,7 @@
#include <asm/dma.h>
#include <asm/rio.h>
#include <asm/bios_ebda.h>
#include <asm/x86_init.h>
#ifdef CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT
int use_calgary __read_mostly = 1;
......@@ -244,7 +245,7 @@ static unsigned long iommu_range_alloc(struct device *dev,
if (panic_on_overflow)
panic("Calgary: fix the allocator.\n");
else
return bad_dma_address;
return DMA_ERROR_CODE;
}
}
......@@ -260,12 +261,15 @@ static dma_addr_t iommu_alloc(struct device *dev, struct iommu_table *tbl,
void *vaddr, unsigned int npages, int direction)
{
unsigned long entry;
dma_addr_t ret = bad_dma_address;
dma_addr_t ret;
entry = iommu_range_alloc(dev, tbl, npages);
if (unlikely(entry == bad_dma_address))
goto error;
if (unlikely(entry == DMA_ERROR_CODE)) {
printk(KERN_WARNING "Calgary: failed to allocate %u pages in "
"iommu %p\n", npages, tbl);
return DMA_ERROR_CODE;
}
/* set the return dma address */
ret = (entry << PAGE_SHIFT) | ((unsigned long)vaddr & ~PAGE_MASK);
......@@ -273,13 +277,7 @@ static dma_addr_t iommu_alloc(struct device *dev, struct iommu_table *tbl,
/* put the TCEs in the HW table */
tce_build(tbl, entry, npages, (unsigned long)vaddr & PAGE_MASK,
direction);
return ret;
error:
printk(KERN_WARNING "Calgary: failed to allocate %u pages in "
"iommu %p\n", npages, tbl);
return bad_dma_address;
}
static void iommu_free(struct iommu_table *tbl, dma_addr_t dma_addr,
......@@ -290,8 +288,8 @@ static void iommu_free(struct iommu_table *tbl, dma_addr_t dma_addr,
unsigned long flags;
/* were we called with bad_dma_address? */
badend = bad_dma_address + (EMERGENCY_PAGES * PAGE_SIZE);
if (unlikely((dma_addr >= bad_dma_address) && (dma_addr < badend))) {
badend = DMA_ERROR_CODE + (EMERGENCY_PAGES * PAGE_SIZE);
if (unlikely((dma_addr >= DMA_ERROR_CODE) && (dma_addr < badend))) {
WARN(1, KERN_ERR "Calgary: driver tried unmapping bad DMA "
"address 0x%Lx\n", dma_addr);
return;
......@@ -318,13 +316,15 @@ static inline struct iommu_table *find_iommu_table(struct device *dev)
pdev = to_pci_dev(dev);
/* search up the device tree for an iommu */
pbus = pdev->bus;
/* is the device behind a bridge? Look for the root bus */
while (pbus->parent)
do {
tbl = pci_iommu(pbus);
if (tbl && tbl->it_busno == pbus->number)
break;
tbl = NULL;
pbus = pbus->parent;
tbl = pci_iommu(pbus);
} while (pbus);
BUG_ON(tbl && (tbl->it_busno != pbus->number));
......@@ -373,7 +373,7 @@ static int calgary_map_sg(struct device *dev, struct scatterlist *sg,
npages = iommu_num_pages(vaddr, s->length, PAGE_SIZE);
entry = iommu_range_alloc(dev, tbl, npages);
if (entry == bad_dma_address) {
if (entry == DMA_ERROR_CODE) {
/* makes sure unmap knows to stop */
s->dma_length = 0;
goto error;
......@@ -391,7 +391,7 @@ static int calgary_map_sg(struct device *dev, struct scatterlist *sg,
error:
calgary_unmap_sg(dev, sg, nelems, dir, NULL);
for_each_sg(sg, s, nelems, i) {
sg->dma_address = bad_dma_address;
sg->dma_address = DMA_ERROR_CODE;
sg->dma_length = 0;
}
return 0;
......@@ -446,7 +446,7 @@ static void* calgary_alloc_coherent(struct device *dev, size_t size,
/* set up tces to cover the allocated range */
mapping = iommu_alloc(dev, tbl, ret, npages, DMA_BIDIRECTIONAL);
if (mapping == bad_dma_address)
if (mapping == DMA_ERROR_CODE)
goto free;
*dma_handle = mapping;
return ret;
......@@ -727,7 +727,7 @@ static void __init calgary_reserve_regions(struct pci_dev *dev)
struct iommu_table *tbl = pci_iommu(dev->bus);
/* reserve EMERGENCY_PAGES from bad_dma_address and up */
iommu_range_reserve(tbl, bad_dma_address, EMERGENCY_PAGES);
iommu_range_reserve(tbl, DMA_ERROR_CODE, EMERGENCY_PAGES);
/* avoid the BIOS/VGA first 640KB-1MB region */
/* for CalIOC2 - avoid the entire first MB */
......@@ -1344,6 +1344,23 @@ static void __init get_tce_space_from_tar(void)
return;
}
static int __init calgary_iommu_init(void)
{
int ret;
/* ok, we're trying to use Calgary - let's roll */
printk(KERN_INFO "PCI-DMA: Using Calgary IOMMU\n");
ret = calgary_init();
if (ret) {
printk(KERN_ERR "PCI-DMA: Calgary init failed %d, "
"falling back to no_iommu\n", ret);
return ret;
}
return 0;
}
void __init detect_calgary(void)
{
int bus;
......@@ -1357,7 +1374,7 @@ void __init detect_calgary(void)
* if the user specified iommu=off or iommu=soft or we found
* another HW IOMMU already, bail out.
*/
if (swiotlb || no_iommu || iommu_detected)
if (no_iommu || iommu_detected)
return;
if (!use_calgary)
......@@ -1442,9 +1459,7 @@ void __init detect_calgary(void)
printk(KERN_INFO "PCI-DMA: Calgary TCE table spec is %d\n",
specified_table_size);
/* swiotlb for devices that aren't behind the Calgary. */
if (max_pfn > MAX_DMA32_PFN)
swiotlb = 1;
x86_init.iommu.iommu_init = calgary_iommu_init;
}
return;
......@@ -1457,35 +1472,6 @@ void __init detect_calgary(void)
}
}
int __init calgary_iommu_init(void)
{
int ret;
if (no_iommu || (swiotlb && !calgary_detected))
return -ENODEV;
if (!calgary_detected)
return -ENODEV;
/* ok, we're trying to use Calgary - let's roll */
printk(KERN_INFO "PCI-DMA: Using Calgary IOMMU\n");
ret = calgary_init();
if (ret) {
printk(KERN_ERR "PCI-DMA: Calgary init failed %d, "
"falling back to no_iommu\n", ret);
return ret;
}
force_iommu = 1;
bad_dma_address = 0x0;
/* dma_ops is set to swiotlb or nommu */
if (!dma_ops)
dma_ops = &nommu_dma_ops;
return 0;
}
static int __init calgary_parse_options(char *p)
{
unsigned int bridge;
......
......@@ -11,10 +11,11 @@
#include <asm/gart.h>
#include <asm/calgary.h>
#include <asm/amd_iommu.h>
#include <asm/x86_init.h>
static int forbid_dac __read_mostly;
struct dma_map_ops *dma_ops;
struct dma_map_ops *dma_ops = &nommu_dma_ops;
EXPORT_SYMBOL(dma_ops);
static int iommu_sac_force __read_mostly;
......@@ -42,9 +43,6 @@ int iommu_detected __read_mostly = 0;
*/
int iommu_pass_through __read_mostly;
dma_addr_t bad_dma_address __read_mostly = 0;
EXPORT_SYMBOL(bad_dma_address);
/* Dummy device used for NULL arguments (normally ISA). */
struct device x86_dma_fallback_dev = {
.init_name = "fallback device",
......@@ -126,20 +124,17 @@ void __init pci_iommu_alloc(void)
/* free the range so iommu could get some range less than 4G */
dma32_free_bootmem();
#endif
if (pci_swiotlb_init())
return;
/*
* The order of these functions is important for
* fall-back/fail-over reasons
*/
gart_iommu_hole_init();
detect_calgary();
detect_intel_iommu();
/* needs to be called after gart_iommu_hole_init */
amd_iommu_detect();
pci_swiotlb_init();
}
void *dma_generic_alloc_coherent(struct device *dev, size_t size,
......@@ -214,7 +209,7 @@ static __init int iommu_setup(char *p)
if (!strncmp(p, "allowdac", 8))
forbid_dac = 0;
if (!strncmp(p, "nodac", 5))
forbid_dac = -1;
forbid_dac = 1;
if (!strncmp(p, "usedac", 6)) {
forbid_dac = -1;
return 1;
......@@ -289,25 +284,17 @@ static int __init pci_iommu_init(void)
#ifdef CONFIG_PCI
dma_debug_add_bus(&pci_bus_type);
#endif
x86_init.iommu.iommu_init();
calgary_iommu_init();
intel_iommu_init();
if (swiotlb) {
printk(KERN_INFO "PCI-DMA: "
"Using software bounce buffering for IO (SWIOTLB)\n");
swiotlb_print_info();
} else
swiotlb_free();
amd_iommu_init();
gart_iommu_init();
no_iommu_init();
return 0;
}
void pci_iommu_shutdown(void)
{
gart_iommu_shutdown();
amd_iommu_shutdown();
}
/* Must execute after PCI subsystem */
rootfs_initcall(pci_iommu_init);
......
......@@ -39,6 +39,7 @@
#include <asm/swiotlb.h>
#include <asm/dma.h>
#include <asm/k8.h>
#include <asm/x86_init.h>
static unsigned long iommu_bus_base; /* GART remapping area (physical) */
static unsigned long iommu_size; /* size of remapping area bytes */
......@@ -46,6 +47,8 @@ static unsigned long iommu_pages; /* .. and in pages */
static u32 *iommu_gatt_base; /* Remapping table */
static dma_addr_t bad_dma_addr;
/*
* If this is disabled the IOMMU will use an optimized flushing strategy
* of only flushing when a mapping is reused. With it true the GART is
......@@ -92,7 +95,7 @@ static unsigned long alloc_iommu(struct device *dev, int size,
base_index = ALIGN(iommu_bus_base & dma_get_seg_boundary(dev),
PAGE_SIZE) >> PAGE_SHIFT;
boundary_size = ALIGN((unsigned long long)dma_get_seg_boundary(dev) + 1,
boundary_size = ALIGN((u64)dma_get_seg_boundary(dev) + 1,
PAGE_SIZE) >> PAGE_SHIFT;
spin_lock_irqsave(&iommu_bitmap_lock, flags);
......@@ -216,7 +219,7 @@ static dma_addr_t dma_map_area(struct device *dev, dma_addr_t phys_mem,
if (panic_on_overflow)
panic("dma_map_area overflow %lu bytes\n", size);
iommu_full(dev, size, dir);
return bad_dma_address;
return bad_dma_addr;
}
for (i = 0; i < npages; i++) {
......@@ -294,7 +297,7 @@ static int dma_map_sg_nonforce(struct device *dev, struct scatterlist *sg,
int i;
#ifdef CONFIG_IOMMU_DEBUG
printk(KERN_DEBUG "dma_map_sg overflow\n");
pr_debug("dma_map_sg overflow\n");
#endif
for_each_sg(sg, s, nents, i) {
......@@ -302,7 +305,7 @@ static int dma_map_sg_nonforce(struct device *dev, struct scatterlist *sg,
if (nonforced_iommu(dev, addr, s->length)) {
addr = dma_map_area(dev, addr, s->length, dir, 0);
if (addr == bad_dma_address) {
if (addr == bad_dma_addr) {
if (i > 0)
gart_unmap_sg(dev, sg, i, dir, NULL);
nents = 0;
......@@ -389,12 +392,14 @@ static int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents,
if (!dev)
dev = &x86_dma_fallback_dev;
out = 0;
start = 0;
start_sg = sgmap = sg;
seg_size = 0;
max_seg_size = dma_get_max_seg_size(dev);
ps = NULL; /* shut up gcc */
out = 0;
start = 0;
start_sg = sg;
sgmap = sg;
seg_size = 0;
max_seg_size = dma_get_max_seg_size(dev);
ps = NULL; /* shut up gcc */
for_each_sg(sg, s, nents, i) {
dma_addr_t addr = sg_phys(s);
......@@ -417,11 +422,12 @@ static int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents,
sgmap, pages, need) < 0)
goto error;
out++;
seg_size = 0;
sgmap = sg_next(sgmap);
pages = 0;
start = i;
start_sg = s;
seg_size = 0;
sgmap = sg_next(sgmap);
pages = 0;
start = i;
start_sg = s;
}
}
......@@ -455,7 +461,7 @@ static int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents,
iommu_full(dev, pages << PAGE_SHIFT, dir);
for_each_sg(sg, s, nents, i)
s->dma_address = bad_dma_address;
s->dma_address = bad_dma_addr;
return 0;
}
......@@ -479,7 +485,7 @@ gart_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_addr,
DMA_BIDIRECTIONAL, align_mask);
flush_gart();
if (paddr != bad_dma_address) {
if (paddr != bad_dma_addr) {
*dma_addr = paddr;
return page_address(page);
}
......@@ -499,6 +505,11 @@ gart_free_coherent(struct device *dev, size_t size, void *vaddr,
free_pages((unsigned long)vaddr, get_order(size));
}
static int gart_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return (dma_addr == bad_dma_addr);
}
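gart_mapping_error() above is the ->mapping_error hook this merge wires into gart_dma_ops: failures are reported in-band as a reserved bus address, and the hook is the one place that knows which address is reserved. A user-space sketch of that sentinel convention (the sentinel value here is an assumption, not the GART's):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;

/* reserved "cannot map" value; the real driver reserves a bus address */
static const dma_addr_t BAD_DMA_ADDR = ~(dma_addr_t)0;

static dma_addr_t toy_map(const void *cpu_addr)
{
	if (!cpu_addr)
		return BAD_DMA_ADDR;             /* mapping failed */
	return (dma_addr_t)(uintptr_t)cpu_addr; /* identity "mapping" */
}

static int toy_mapping_error(dma_addr_t addr)
{
	return addr == BAD_DMA_ADDR;
}

int main(void)
{
	dma_addr_t addr = toy_map(NULL);

	if (toy_mapping_error(addr))
		fprintf(stderr, "map failed, do not queue the descriptor\n");
	return 0;
}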
static int no_agp;
static __init unsigned long check_iommu_size(unsigned long aper, u64 aper_size)
......@@ -515,7 +526,7 @@ static __init unsigned long check_iommu_size(unsigned long aper, u64 aper_size)
iommu_size -= round_up(a, PMD_PAGE_SIZE) - a;
if (iommu_size < 64*1024*1024) {
printk(KERN_WARNING
pr_warning(
"PCI-DMA: Warning: Small IOMMU %luMB."
" Consider increasing the AGP aperture in BIOS\n",
iommu_size >> 20);
......@@ -570,28 +581,32 @@ void set_up_gart_resume(u32 aper_order, u32 aper_alloc)
aperture_alloc = aper_alloc;
}
static int gart_resume(struct sys_device *dev)
{
printk(KERN_INFO "PCI-DMA: Resuming GART IOMMU\n");
if (fix_up_north_bridges) {
int i;
printk(KERN_INFO "PCI-DMA: Restoring GART aperture settings\n");
for (i = 0; i < num_k8_northbridges; i++) {
struct pci_dev *dev = k8_northbridges[i];
/*
* Don't enable translations just yet. That is the next
* step. Restore the pre-suspend aperture settings.
*/
pci_write_config_dword(dev, AMD64_GARTAPERTURECTL,
aperture_order << 1);
pci_write_config_dword(dev, AMD64_GARTAPERTUREBASE,
aperture_alloc >> 25);
}
}
static void gart_fixup_northbridges(struct sys_device *dev)
{
int i;
if (!fix_up_north_bridges)
return;
pr_info("PCI-DMA: Restoring GART aperture settings\n");
for (i = 0; i < num_k8_northbridges; i++) {
struct pci_dev *dev = k8_northbridges[i];
/*
* Don't enable translations just yet. That is the next
* step. Restore the pre-suspend aperture settings.
*/
pci_write_config_dword(dev, AMD64_GARTAPERTURECTL, aperture_order << 1);
pci_write_config_dword(dev, AMD64_GARTAPERTUREBASE, aperture_alloc >> 25);
}
}
static int gart_resume(struct sys_device *dev)
{
pr_info("PCI-DMA: Resuming GART IOMMU\n");
gart_fixup_northbridges(dev);
enable_gart_translations();
......@@ -604,15 +619,14 @@ static int gart_suspend(struct sys_device *dev, pm_message_t state)
}
static struct sysdev_class gart_sysdev_class = {
.name = "gart",
.suspend = gart_suspend,
.resume = gart_resume,
.name = "gart",
.suspend = gart_suspend,
.resume = gart_resume,
};
static struct sys_device device_gart = {
.id = 0,
.cls = &gart_sysdev_class,
.cls = &gart_sysdev_class,
};
/*
......@@ -627,7 +641,8 @@ static __init int init_k8_gatt(struct agp_kern_info *info)
void *gatt;
int i, error;
printk(KERN_INFO "PCI-DMA: Disabling AGP.\n");
pr_info("PCI-DMA: Disabling AGP.\n");
aper_size = aper_base = info->aper_size = 0;
dev = NULL;
for (i = 0; i < num_k8_northbridges; i++) {
......@@ -645,6 +660,7 @@ static __init int init_k8_gatt(struct agp_kern_info *info)
}
if (!aper_base)
goto nommu;
info->aper_base = aper_base;
info->aper_size = aper_size >> 20;
......@@ -667,14 +683,14 @@ static __init int init_k8_gatt(struct agp_kern_info *info)
flush_gart();
printk(KERN_INFO "PCI-DMA: aperture base @ %x size %u KB\n",
pr_info("PCI-DMA: aperture base @ %x size %u KB\n",
aper_base, aper_size>>10);
return 0;
nommu:
/* Should not happen anymore */
printk(KERN_WARNING "PCI-DMA: More than 4GB of RAM and no IOMMU\n"
pr_warning("PCI-DMA: More than 4GB of RAM and no IOMMU\n"
"falling back to iommu=soft.\n");
return -1;
}
......@@ -686,14 +702,15 @@ static struct dma_map_ops gart_dma_ops = {
.unmap_page = gart_unmap_page,
.alloc_coherent = gart_alloc_coherent,
.free_coherent = gart_free_coherent,
.mapping_error = gart_mapping_error,
};
void gart_iommu_shutdown(void)
static void gart_iommu_shutdown(void)
{
struct pci_dev *dev;
int i;
if (no_agp && (dma_ops != &gart_dma_ops))
if (no_agp)
return;
for (i = 0; i < num_k8_northbridges; i++) {
......@@ -708,7 +725,7 @@ void gart_iommu_shutdown(void)
}
}
void __init gart_iommu_init(void)
int __init gart_iommu_init(void)
{
struct agp_kern_info info;
unsigned long iommu_start;
......@@ -718,7 +735,7 @@ void __init gart_iommu_init(void)
long i;
if (cache_k8_northbridges() < 0 || num_k8_northbridges == 0)
return;
return 0;
#ifndef CONFIG_AGP_AMD64
no_agp = 1;
......@@ -730,35 +747,28 @@ void __init gart_iommu_init(void)
(agp_copy_info(agp_bridge, &info) < 0);
#endif
if (swiotlb)
return;
/* Did we detect a different HW IOMMU? */
if (iommu_detected && !gart_iommu_aperture)
return;
if (no_iommu ||
(!force_iommu && max_pfn <= MAX_DMA32_PFN) ||
!gart_iommu_aperture ||
(no_agp && init_k8_gatt(&info) < 0)) {
if (max_pfn > MAX_DMA32_PFN) {
printk(KERN_WARNING "More than 4GB of memory "
"but GART IOMMU not available.\n");
printk(KERN_WARNING "falling back to iommu=soft.\n");
pr_warning("More than 4GB of memory but GART IOMMU not available.\n");
pr_warning("falling back to iommu=soft.\n");
}
return;
return 0;
}
/* need to map that range */
aper_size = info.aper_size << 20;
aper_base = info.aper_base;
end_pfn = (aper_base>>PAGE_SHIFT) + (aper_size>>PAGE_SHIFT);
aper_size = info.aper_size << 20;
aper_base = info.aper_base;
end_pfn = (aper_base>>PAGE_SHIFT) + (aper_size>>PAGE_SHIFT);
if (end_pfn > max_low_pfn_mapped) {
start_pfn = (aper_base>>PAGE_SHIFT);
init_memory_mapping(start_pfn<<PAGE_SHIFT, end_pfn<<PAGE_SHIFT);
}
printk(KERN_INFO "PCI-DMA: using GART IOMMU.\n");
pr_info("PCI-DMA: using GART IOMMU.\n");
iommu_size = check_iommu_size(info.aper_base, aper_size);
iommu_pages = iommu_size >> PAGE_SHIFT;
......@@ -773,8 +783,7 @@ void __init gart_iommu_init(void)
ret = dma_debug_resize_entries(iommu_pages);
if (ret)
printk(KERN_DEBUG
"PCI-DMA: Cannot trace all the entries\n");
pr_debug("PCI-DMA: Cannot trace all the entries\n");
}
#endif
......@@ -784,15 +793,14 @@ void __init gart_iommu_init(void)
*/
iommu_area_reserve(iommu_gart_bitmap, 0, EMERGENCY_PAGES);
agp_memory_reserved = iommu_size;
printk(KERN_INFO
"PCI-DMA: Reserving %luMB of IOMMU area in the AGP aperture\n",
pr_info("PCI-DMA: Reserving %luMB of IOMMU area in the AGP aperture\n",
iommu_size >> 20);
iommu_start = aper_size - iommu_size;
iommu_bus_base = info.aper_base + iommu_start;
bad_dma_address = iommu_bus_base;
iommu_gatt_base = agp_gatt_table + (iommu_start>>PAGE_SHIFT);
agp_memory_reserved = iommu_size;
iommu_start = aper_size - iommu_size;
iommu_bus_base = info.aper_base + iommu_start;
bad_dma_addr = iommu_bus_base;
iommu_gatt_base = agp_gatt_table + (iommu_start>>PAGE_SHIFT);
/*
* Unmap the IOMMU part of the GART. The alias of the page is
......@@ -814,7 +822,7 @@ void __init gart_iommu_init(void)
* the pages as Not-Present:
*/
wbinvd();
/*
* Now all caches are flushed and we can safely enable
* GART hardware. Doing it early leaves the possibility
......@@ -838,6 +846,10 @@ void __init gart_iommu_init(void)
flush_gart();
dma_ops = &gart_dma_ops;
x86_platform.iommu_shutdown = gart_iommu_shutdown;
swiotlb = 0;
return 0;
}
void __init gart_parse_options(char *p)
......@@ -856,7 +868,7 @@ void __init gart_parse_options(char *p)
#endif
if (isdigit(*p) && get_option(&p, &arg))
iommu_size = arg;
if (!strncmp(p, "fullflush", 8))
if (!strncmp(p, "fullflush", 9))
iommu_fullflush = 1;
if (!strncmp(p, "nofullflush", 11))
iommu_fullflush = 0;
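The hunk above adjusts the strncmp() length used to recognize "fullflush". The length argument decides how strict the prefix match is; comparing one byte too few silently accepts near-misses. A standalone illustration (nothing here is the kernel's parser):

#include <stdio.h>
#include <string.h>

int main(void)
{
	/* "fullflush" is 9 characters; an 8-byte compare only checks
	 * "fullflus" and therefore also matches corrupted options */
	printf("len 8: %d\n", !strncmp("fullflusX", "fullflush", 8)); /* 1: false match */
	printf("len 9: %d\n", !strncmp("fullflusX", "fullflush", 9)); /* 0 */
	return 0;
}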
......
......@@ -33,7 +33,7 @@ static dma_addr_t nommu_map_page(struct device *dev, struct page *page,
dma_addr_t bus = page_to_phys(page) + offset;
WARN_ON(size == 0);
if (!check_addr("map_single", dev, bus, size))
return bad_dma_address;
return DMA_ERROR_CODE;
flush_write_buffers();
return bus;
}
......@@ -103,12 +103,3 @@ struct dma_map_ops nommu_dma_ops = {
.sync_sg_for_device = nommu_sync_sg_for_device,
.is_phys = 1,
};
void __init no_iommu_init(void)
{
if (dma_ops)
return;
force_iommu = 0; /* no HW IOMMU */
dma_ops = &nommu_dma_ops;
}
......@@ -42,18 +42,28 @@ static struct dma_map_ops swiotlb_dma_ops = {
.dma_supported = NULL,
};
void __init pci_swiotlb_init(void)
/*
* pci_swiotlb_init - initialize swiotlb if necessary
*
* This returns non-zero if we are forced to use swiotlb (by the boot
* option).
*/
int __init pci_swiotlb_init(void)
{
int use_swiotlb = swiotlb | swiotlb_force;
/* don't initialize swiotlb if iommu=off (no_iommu=1) */
#ifdef CONFIG_X86_64
if ((!iommu_detected && !no_iommu && max_pfn > MAX_DMA32_PFN))
if (!no_iommu && max_pfn > MAX_DMA32_PFN)
swiotlb = 1;
#endif
if (swiotlb_force)
swiotlb = 1;
if (swiotlb) {
printk(KERN_INFO "PCI-DMA: Using software bounce buffering for IO (SWIOTLB)\n");
swiotlb_init();
swiotlb_init(0);
dma_ops = &swiotlb_dma_ops;
}
return use_swiotlb;
}
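pci_swiotlb_init() above now reports whether bounce buffering was selected: on x86-64 it is enabled when usable memory extends past the 32-bit DMA limit and the IOMMU was not disabled, or unconditionally when forced from the command line. A toy model of just that decision (4 KiB pages assumed; the real code reads boot-time globals):

#include <stdbool.h>
#include <stdio.h>

#define MAX_DMA32_PFN (1UL << (32 - 12)) /* first page frame above 4 GiB */

static bool need_swiotlb(bool no_iommu, bool force, unsigned long max_pfn)
{
	if (force)
		return true; /* swiotlb=force on the command line */
	if (!no_iommu && max_pfn > MAX_DMA32_PFN)
		return true; /* RAM above 4 GiB needs bounce buffers */
	return false;
}

int main(void)
{
	printf("%d\n", need_swiotlb(false, false, 2 * MAX_DMA32_PFN)); /* 1 */
	printf("%d\n", need_swiotlb(true,  false, 2 * MAX_DMA32_PFN)); /* 0 */
	return 0;
}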
......@@ -23,7 +23,7 @@
# include <linux/ctype.h>
# include <linux/mc146818rtc.h>
#else
# include <asm/iommu.h>
# include <asm/x86_init.h>
#endif
/*
......@@ -622,7 +622,7 @@ void native_machine_shutdown(void)
#endif
#ifdef CONFIG_X86_64
pci_iommu_shutdown();
x86_platform.iommu_shutdown();
#endif
}
......
......@@ -14,10 +14,13 @@
#include <asm/time.h>
#include <asm/irq.h>
#include <asm/tsc.h>
#include <asm/iommu.h>
void __cpuinit x86_init_noop(void) { }
void __init x86_init_uint_noop(unsigned int unused) { }
void __init x86_init_pgd_noop(pgd_t *unused) { }
int __init iommu_init_noop(void) { return 0; }
void iommu_shutdown_noop(void) { }
/*
* The platform setup functions are preset with the default functions
......@@ -62,6 +65,10 @@ struct x86_init_ops x86_init __initdata = {
.tsc_pre_init = x86_init_noop,
.timer_init = hpet_time_init,
},
.iommu = {
.iommu_init = iommu_init_noop,
},
};
struct x86_cpuinit_ops x86_cpuinit __cpuinitdata = {
......@@ -72,4 +79,5 @@ struct x86_platform_ops x86_platform = {
.calibrate_tsc = native_calibrate_tsc,
.get_wallclock = mach_get_cmos_time,
.set_wallclock = mach_set_rtc_mmss,
.iommu_shutdown = iommu_shutdown_noop,
};
/*
* Written by Pekka Paalanen, 2008-2009 <pq@iki.fi>
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/module.h>
#include <linux/io.h>
#include <linux/mmiotrace.h>
#define MODULE_NAME "testmmiotrace"
static unsigned long mmio_address;
module_param(mmio_address, ulong, 0);
MODULE_PARM_DESC(mmio_address, " Start address of the mapping of 16 kB "
......@@ -30,7 +31,7 @@ static unsigned v32(unsigned i)
static void do_write_test(void __iomem *p)
{
unsigned int i;
pr_info(MODULE_NAME ": write test.\n");
pr_info("write test.\n");
mmiotrace_printk("Write test.\n");
for (i = 0; i < 256; i++)
......@@ -47,7 +48,7 @@ static void do_read_test(void __iomem *p)
{
unsigned int i;
unsigned errs[3] = { 0 };
pr_info(MODULE_NAME ": read test.\n");
pr_info("read test.\n");
mmiotrace_printk("Read test.\n");
for (i = 0; i < 256; i++)
......@@ -68,7 +69,7 @@ static void do_read_test(void __iomem *p)
static void do_read_far_test(void __iomem *p)
{
pr_info(MODULE_NAME ": read far test.\n");
pr_info("read far test.\n");
mmiotrace_printk("Read far test.\n");
ioread32(p + read_far);
......@@ -78,7 +79,7 @@ static void do_test(unsigned long size)
{
void __iomem *p = ioremap_nocache(mmio_address, size);
if (!p) {
pr_err(MODULE_NAME ": could not ioremap, aborting.\n");
pr_err("could not ioremap, aborting.\n");
return;
}
mmiotrace_printk("ioremap returned %p.\n", p);
......@@ -94,24 +95,22 @@ static int __init init(void)
unsigned long size = (read_far) ? (8 << 20) : (16 << 10);
if (mmio_address == 0) {
pr_err(MODULE_NAME ": you have to use the module argument "
"mmio_address.\n");
pr_err(MODULE_NAME ": DO NOT LOAD THIS MODULE UNLESS"
" YOU REALLY KNOW WHAT YOU ARE DOING!\n");
pr_err("you have to use the module argument mmio_address.\n");
pr_err("DO NOT LOAD THIS MODULE UNLESS YOU REALLY KNOW WHAT YOU ARE DOING!\n");
return -ENXIO;
}
pr_warning(MODULE_NAME ": WARNING: mapping %lu kB @ 0x%08lx in PCI "
"address space, and writing 16 kB of rubbish in there.\n",
size >> 10, mmio_address);
pr_warning("WARNING: mapping %lu kB @ 0x%08lx in PCI address space, "
"and writing 16 kB of rubbish in there.\n",
size >> 10, mmio_address);
do_test(size);
pr_info(MODULE_NAME ": All done.\n");
pr_info("All done.\n");
return 0;
}
static void __exit cleanup(void)
{
pr_debug(MODULE_NAME ": unloaded.\n");
pr_debug("unloaded.\n");
}
module_init(init);
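The testmmiotrace hunks above work because pr_fmt() is defined before the includes: every pr_*() call is prefixed with the module name automatically, which is what lets the explicit MODULE_NAME prefixes be dropped at each call site. A user-space imitation of the mechanism (printf standing in for printk):

#include <stdio.h>

#define pr_fmt(fmt) "testmmiotrace: " fmt
/* ## __VA_ARGS__ (swallowing the comma when empty) is a GNU extension */
#define pr_info(fmt, ...) printf(pr_fmt(fmt), ##__VA_ARGS__)

int main(void)
{
	pr_info("write test.\n");      /* "testmmiotrace: write test." */
	pr_info("%d kB mapped\n", 16); /* "testmmiotrace: 16 kB mapped" */
	return 0;
}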
......
......@@ -177,9 +177,6 @@ static struct ata_port_operations pcmcia_8bit_port_ops = {
.drain_fifo = pcmcia_8bit_drain_fifo,
};
#define CS_CHECK(fn, ret) \
do { last_fn = (fn); if ((last_ret = (ret)) != 0) goto cs_failed; } while (0)
struct pcmcia_config_check {
unsigned long ctl_base;
......@@ -252,7 +249,7 @@ static int pcmcia_init_one(struct pcmcia_device *pdev)
struct ata_port *ap;
struct ata_pcmcia_info *info;
struct pcmcia_config_check *stk = NULL;
int last_ret = 0, last_fn = 0, is_kme = 0, ret = -ENOMEM, p;
int is_kme = 0, ret = -ENOMEM, p;
unsigned long io_base, ctl_base;
void __iomem *io_addr, *ctl_addr;
int n_ports = 1;
......@@ -271,7 +268,6 @@ static int pcmcia_init_one(struct pcmcia_device *pdev)
pdev->io.Attributes2 = IO_DATA_PATH_WIDTH_8;
pdev->io.IOAddrLines = 3;
pdev->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING;
pdev->irq.IRQInfo1 = IRQ_LEVEL_ID;
pdev->conf.Attributes = CONF_ENABLE_IRQ;
pdev->conf.IntType = INT_MEMORY_AND_IO;
......@@ -296,8 +292,13 @@ static int pcmcia_init_one(struct pcmcia_device *pdev)
}
io_base = pdev->io.BasePort1;
ctl_base = stk->ctl_base;
CS_CHECK(RequestIRQ, pcmcia_request_irq(pdev, &pdev->irq));
CS_CHECK(RequestConfiguration, pcmcia_request_configuration(pdev, &pdev->conf));
ret = pcmcia_request_irq(pdev, &pdev->irq);
if (ret)
goto failed;
ret = pcmcia_request_configuration(pdev, &pdev->conf);
if (ret)
goto failed;
/* iomap */
ret = -ENOMEM;
......@@ -351,8 +352,6 @@ static int pcmcia_init_one(struct pcmcia_device *pdev)
kfree(stk);
return 0;
cs_failed:
cs_error(pdev, last_fn, last_ret);
failed:
kfree(stk);
info->ndev = 0;
......
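The pata_pcmcia conversion above removes the CS_CHECK() macro, which hid a goto behind every call, in favor of explicit return-value checks feeding one cleanup label. A compact illustration of the resulting shape (stub functions, not the pcmcia API):

#include <stdio.h>

static int request_irq_stub(void)    { return 0; }
static int request_config_stub(void) { return -1; /* simulated failure */ }

static int config_device(void)
{
	int ret;

	ret = request_irq_stub();
	if (ret)
		goto failed;

	ret = request_config_stub();
	if (ret)
		goto failed;

	return 0;

failed:
	fprintf(stderr, "config failed: %d\n", ret);
	/* undo any acquired resources here */
	return ret;
}

int main(void)
{
	return config_device() ? 1 : 0;
}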
......@@ -735,6 +735,21 @@ diskstats(struct gendisk *disk, struct bio *bio, ulong duration, sector_t sector
part_stat_unlock();
}
/*
* Ensure we don't create aliases in VI caches
*/
static inline void
killalias(struct bio *bio)
{
struct bio_vec *bv;
int i;
if (bio_data_dir(bio) == READ)
__bio_for_each_segment(bv, bio, i, 0) {
flush_dcache_page(bv->bv_page);
}
}
void
aoecmd_ata_rsp(struct sk_buff *skb)
{
......@@ -853,8 +868,12 @@ aoecmd_ata_rsp(struct sk_buff *skb)
if (buf && --buf->nframesout == 0 && buf->resid == 0) {
diskstats(d->gd, buf->bio, jiffies - buf->stime, buf->sector);
n = (buf->flags & BUFFL_FAIL) ? -EIO : 0;
bio_endio(buf->bio, n);
if (buf->flags & BUFFL_FAIL)
bio_endio(buf->bio, -EIO);
else {
killalias(buf->bio);
bio_endio(buf->bio, 0);
}
mempool_free(buf, d->bufpool);
}
......
......@@ -867,11 +867,9 @@ static int bluecard_probe(struct pcmcia_device *link)
link->io.Attributes1 = IO_DATA_PATH_WIDTH_8;
link->io.NumPorts1 = 8;
link->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING | IRQ_HANDLE_PRESENT;
link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING;
link->irq.Handler = bluecard_interrupt;
link->irq.Instance = info;
link->conf.Attributes = CONF_ENABLE_IRQ;
link->conf.IntType = INT_MEMORY_AND_IO;
......@@ -905,22 +903,16 @@ static int bluecard_config(struct pcmcia_device *link)
break;
}
if (i != 0) {
cs_error(link, RequestIO, i);
if (i != 0)
goto failed;
}
i = pcmcia_request_irq(link, &link->irq);
if (i != 0) {
cs_error(link, RequestIRQ, i);
if (i != 0)
link->irq.AssignedIRQ = 0;
}
i = pcmcia_request_configuration(link, &link->conf);
if (i != 0) {
cs_error(link, RequestConfiguration, i);
if (i != 0)
goto failed;
}
if (bluecard_open(info) != 0)
goto failed;
......
......@@ -659,11 +659,9 @@ static int bt3c_probe(struct pcmcia_device *link)
link->io.Attributes1 = IO_DATA_PATH_WIDTH_8;
link->io.NumPorts1 = 8;
link->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING | IRQ_HANDLE_PRESENT;
link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING;
link->irq.Handler = bt3c_interrupt;
link->irq.Instance = info;
link->conf.Attributes = CONF_ENABLE_IRQ;
link->conf.IntType = INT_MEMORY_AND_IO;
......@@ -740,21 +738,16 @@ static int bt3c_config(struct pcmcia_device *link)
goto found_port;
BT_ERR("No usable port range found");
cs_error(link, RequestIO, -ENODEV);
goto failed;
found_port:
i = pcmcia_request_irq(link, &link->irq);
if (i != 0) {
cs_error(link, RequestIRQ, i);
if (i != 0)
link->irq.AssignedIRQ = 0;
}
i = pcmcia_request_configuration(link, &link->conf);
if (i != 0) {
cs_error(link, RequestConfiguration, i);
if (i != 0)
goto failed;
}
if (bt3c_open(info) != 0)
goto failed;
......
......@@ -588,11 +588,9 @@ static int btuart_probe(struct pcmcia_device *link)
link->io.Attributes1 = IO_DATA_PATH_WIDTH_8;
link->io.NumPorts1 = 8;
link->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING | IRQ_HANDLE_PRESENT;
link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING;
link->irq.Handler = btuart_interrupt;
link->irq.Instance = info;
link->conf.Attributes = CONF_ENABLE_IRQ;
link->conf.IntType = INT_MEMORY_AND_IO;
......@@ -669,21 +667,16 @@ static int btuart_config(struct pcmcia_device *link)
goto found_port;
BT_ERR("No usable port range found");
cs_error(link, RequestIO, -ENODEV);
goto failed;
found_port:
i = pcmcia_request_irq(link, &link->irq);
if (i != 0) {
cs_error(link, RequestIRQ, i);
if (i != 0)
link->irq.AssignedIRQ = 0;
}
i = pcmcia_request_configuration(link, &link->conf);
if (i != 0) {
cs_error(link, RequestConfiguration, i);
if (i != 0)
goto failed;
}
if (btuart_open(info) != 0)
goto failed;
......
......@@ -573,11 +573,9 @@ static int dtl1_probe(struct pcmcia_device *link)
link->io.Attributes1 = IO_DATA_PATH_WIDTH_8;
link->io.NumPorts1 = 8;
link->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING | IRQ_HANDLE_PRESENT;
link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING;
link->irq.Handler = dtl1_interrupt;
link->irq.Instance = info;
link->conf.Attributes = CONF_ENABLE_IRQ;
link->conf.IntType = INT_MEMORY_AND_IO;
......@@ -622,16 +620,12 @@ static int dtl1_config(struct pcmcia_device *link)
goto failed;
i = pcmcia_request_irq(link, &link->irq);
if (i != 0) {
cs_error(link, RequestIRQ, i);
if (i != 0)
link->irq.AssignedIRQ = 0;
}
i = pcmcia_request_configuration(link, &link->conf);
if (i != 0) {
cs_error(link, RequestConfiguration, i);
if (i != 0)
goto failed;
}
if (dtl1_open(info) != 0)
goto failed;
......
......@@ -56,9 +56,8 @@ config AGP_AMD
X on AMD Irongate, 761, and 762 chipsets.
config AGP_AMD64
tristate "AMD Opteron/Athlon64 on-CPU GART support" if !GART_IOMMU
tristate "AMD Opteron/Athlon64 on-CPU GART support"
depends on AGP && X86
default y if GART_IOMMU
help
This option gives you AGP support for the GLX component of
X using the on-CPU northbridge of the AMD Athlon64/Opteron CPUs.
......
......@@ -23,8 +23,6 @@
* All rights reserved. Licensed under dual BSD/GPL license.
*/
/* #define PCMCIA_DEBUG 6 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
......@@ -47,18 +45,17 @@
/* #define ATR_CSUM */
#ifdef PCMCIA_DEBUG
#define reader_to_dev(x) (&handle_to_dev(x->p_dev))
static int pc_debug = PCMCIA_DEBUG;
module_param(pc_debug, int, 0600);
#define DEBUGP(n, rdr, x, args...) do { \
if (pc_debug >= (n)) \
dev_printk(KERN_DEBUG, reader_to_dev(rdr), "%s:" x, \
__func__ , ## args); \
#define reader_to_dev(x) (&x->p_dev->dev)
/* n (debug level) is ignored */
/* additional debug output may be enabled by re-compiling with
* CM4000_DEBUG set */
/* #define CM4000_DEBUG */
#define DEBUGP(n, rdr, x, args...) do { \
dev_dbg(reader_to_dev(rdr), "%s:" x, \
__func__ , ## args); \
} while (0)
#else
#define DEBUGP(n, rdr, x, args...)
#endif
static char *version = "cm4000_cs.c v2.4.0gm6 - All bugs added by Harald Welte";
#define T_1SEC (HZ)
......@@ -174,14 +171,13 @@ static unsigned char fi_di_table[10][14] = {
/* 9 */ {0x09,0x19,0x29,0x39,0x49,0x59,0x69,0x11,0x11,0x99,0xA9,0xB9,0xC9,0xD9}
};
#ifndef PCMCIA_DEBUG
#ifndef CM4000_DEBUG
#define xoutb outb
#define xinb inb
#else
static inline void xoutb(unsigned char val, unsigned short port)
{
if (pc_debug >= 7)
printk(KERN_DEBUG "outb(val=%.2x,port=%.4x)\n", val, port);
pr_debug("outb(val=%.2x,port=%.4x)\n", val, port);
outb(val, port);
}
static inline unsigned char xinb(unsigned short port)
......@@ -189,8 +185,7 @@ static inline unsigned char xinb(unsigned short port)
unsigned char val;
val = inb(port);
if (pc_debug >= 7)
printk(KERN_DEBUG "%.2x=inb(%.4x)\n", val, port);
pr_debug("%.2x=inb(%.4x)\n", val, port);
return val;
}
......@@ -514,12 +509,10 @@ static int set_protocol(struct cm4000_dev *dev, struct ptsreq *ptsreq)
for (i = 0; i < 4; i++) {
xoutb(i, REG_BUF_ADDR(iobase));
xoutb(dev->pts[i], REG_BUF_DATA(iobase)); /* buf data */
#ifdef PCMCIA_DEBUG
if (pc_debug >= 5)
printk("0x%.2x ", dev->pts[i]);
#ifdef CM4000_DEBUG
pr_debug("0x%.2x ", dev->pts[i]);
}
if (pc_debug >= 5)
printk("\n");
pr_debug("\n");
#else
}
#endif
......@@ -579,14 +572,13 @@ static int set_protocol(struct cm4000_dev *dev, struct ptsreq *ptsreq)
pts_reply[i] = inb(REG_BUF_DATA(iobase));
}
#ifdef PCMCIA_DEBUG
#ifdef CM4000_DEBUG
DEBUGP(2, dev, "PTSreply: ");
for (i = 0; i < num_bytes_read; i++) {
if (pc_debug >= 5)
printk("0x%.2x ", pts_reply[i]);
pr_debug("0x%.2x ", pts_reply[i]);
}
printk("\n");
#endif /* PCMCIA_DEBUG */
pr_debug("\n");
#endif /* CM4000_DEBUG */
DEBUGP(5, dev, "Clear Tactive in Flags1\n");
xoutb(0x20, REG_FLAGS1(iobase));
......@@ -655,7 +647,7 @@ static void terminate_monitor(struct cm4000_dev *dev)
DEBUGP(5, dev, "Delete timer\n");
del_timer_sync(&dev->timer);
#ifdef PCMCIA_DEBUG
#ifdef CM4000_DEBUG
dev->monitor_running = 0;
#endif
......@@ -898,7 +890,7 @@ static void monitor_card(unsigned long p)
DEBUGP(4, dev, "ATR checksum (0x%.2x, should "
"be zero) failed\n", dev->atr_csum);
}
#ifdef PCMCIA_DEBUG
#ifdef CM4000_DEBUG
else if (test_bit(IS_BAD_LENGTH, &dev->flags)) {
DEBUGP(4, dev, "ATR length error\n");
} else {
......@@ -1415,7 +1407,7 @@ static long cmm_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
int size;
int rc;
void __user *argp = (void __user *)arg;
#ifdef PCMCIA_DEBUG
#ifdef CM4000_DEBUG
char *ioctl_names[CM_IOC_MAXNR + 1] = {
[_IOC_NR(CM_IOCGSTATUS)] "CM_IOCGSTATUS",
[_IOC_NR(CM_IOCGATR)] "CM_IOCGATR",
......@@ -1423,9 +1415,9 @@ static long cmm_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
[_IOC_NR(CM_IOCSPTS)] "CM_IOCSPTS",
[_IOC_NR(CM_IOSDBGLVL)] "CM4000_DBGLVL",
};
#endif
DEBUGP(3, dev, "cmm_ioctl(device=%d.%d) %s\n", imajor(inode),
iminor(inode), ioctl_names[_IOC_NR(cmd)]);
#endif
lock_kernel();
rc = -ENODEV;
......@@ -1523,7 +1515,7 @@ static long cmm_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
}
case CM_IOCARDOFF:
#ifdef PCMCIA_DEBUG
#ifdef CM4000_DEBUG
DEBUGP(4, dev, "... in CM_IOCARDOFF\n");
if (dev->flags0 & 0x01) {
DEBUGP(4, dev, " Card inserted\n");
......@@ -1625,18 +1617,9 @@ static long cmm_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
}
break;
#ifdef PCMCIA_DEBUG
case CM_IOSDBGLVL: /* set debug log level */
{
int old_pc_debug = 0;
old_pc_debug = pc_debug;
if (copy_from_user(&pc_debug, argp, sizeof(int)))
rc = -EFAULT;
else if (old_pc_debug != pc_debug)
DEBUGP(0, dev, "Changed debug log level "
"to %i\n", pc_debug);
}
#ifdef CM4000_DEBUG
case CM_IOSDBGLVL:
rc = -ENOTTY;
break;
#endif
default:
......
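The cm4000 changes above retire the runtime pc_debug level (and its CM_IOSDBGLVL ioctl) in favor of a compile-time CM4000_DEBUG switch over pr_debug()/dev_dbg(). A user-space model of that switch (fprintf standing in for the kernel's debug output):

#include <stdio.h>

/* #define CM4000_DEBUG */ /* re-compile with this set to enable output */

#ifdef CM4000_DEBUG
#define pr_debug(fmt, ...) fprintf(stderr, fmt, ##__VA_ARGS__)
#else
#define pr_debug(fmt, ...) do { } while (0)
#endif

int main(void)
{
	pr_debug("outb(val=%.2x,port=%.4x)\n", 0x20, 0x3f8); /* compiled out */
	puts("reader running");
	return 0;
}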
......@@ -17,8 +17,6 @@
* All rights reserved, Dual BSD/GPL Licensed.
*/
/* #define PCMCIA_DEBUG 6 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
......@@ -41,18 +39,16 @@
#include "cm4040_cs.h"
#ifdef PCMCIA_DEBUG
#define reader_to_dev(x) (&handle_to_dev(x->p_dev))
static int pc_debug = PCMCIA_DEBUG;
module_param(pc_debug, int, 0600);
#define DEBUGP(n, rdr, x, args...) do { \
if (pc_debug >= (n)) \
dev_printk(KERN_DEBUG, reader_to_dev(rdr), "%s:" x, \
__func__ , ##args); \
#define reader_to_dev(x) (&x->p_dev->dev)
/* n (debug level) is ignored */
/* additional debug output may be enabled by re-compiling with
* CM4040_DEBUG set */
/* #define CM4040_DEBUG */
#define DEBUGP(n, rdr, x, args...) do { \
dev_dbg(reader_to_dev(rdr), "%s:" x, \
__func__ , ## args); \
} while (0)
#else
#define DEBUGP(n, rdr, x, args...)
#endif
static char *version =
"OMNIKEY CardMan 4040 v1.1.0gm5 - All bugs added by Harald Welte";
......@@ -90,14 +86,13 @@ struct reader_dev {
static struct pcmcia_device *dev_table[CM_MAX_DEV];
#ifndef PCMCIA_DEBUG
#ifndef CM4040_DEBUG
#define xoutb outb
#define xinb inb
#else
static inline void xoutb(unsigned char val, unsigned short port)
{
if (pc_debug >= 7)
printk(KERN_DEBUG "outb(val=%.2x,port=%.4x)\n", val, port);
pr_debug("outb(val=%.2x,port=%.4x)\n", val, port);
outb(val, port);
}
......@@ -106,8 +101,7 @@ static inline unsigned char xinb(unsigned short port)
unsigned char val;
val = inb(port);
if (pc_debug >= 7)
printk(KERN_DEBUG "%.2x=inb(%.4x)\n", val, port);
pr_debug("%.2x=inb(%.4x)\n", val, port);
return val;
}
#endif
......@@ -260,23 +254,22 @@ static ssize_t cm4040_read(struct file *filp, char __user *buf,
return -EIO;
}
dev->r_buf[i] = xinb(iobase + REG_OFFSET_BULK_IN);
#ifdef PCMCIA_DEBUG
if (pc_debug >= 6)
printk(KERN_DEBUG "%lu:%2x ", i, dev->r_buf[i]);
#ifdef CM4040_DEBUG
pr_debug("%lu:%2x ", i, dev->r_buf[i]);
}
printk("\n");
pr_debug("\n");
#else
}
#endif
bytes_to_read = 5 + le32_to_cpu(*(__le32 *)&dev->r_buf[1]);
DEBUGP(6, dev, "BytesToRead=%lu\n", bytes_to_read);
DEBUGP(6, dev, "BytesToRead=%zu\n", bytes_to_read);
min_bytes_to_read = min(count, bytes_to_read + 5);
min_bytes_to_read = min_t(size_t, min_bytes_to_read, READ_WRITE_BUFFER_SIZE);
DEBUGP(6, dev, "Min=%lu\n", min_bytes_to_read);
DEBUGP(6, dev, "Min=%zu\n", min_bytes_to_read);
for (i = 0; i < (min_bytes_to_read-5); i++) {
rc = wait_for_bulk_in_ready(dev);
......@@ -288,11 +281,10 @@ static ssize_t cm4040_read(struct file *filp, char __user *buf,
return -EIO;
}
dev->r_buf[i+5] = xinb(iobase + REG_OFFSET_BULK_IN);
#ifdef PCMCIA_DEBUG
if (pc_debug >= 6)
printk(KERN_DEBUG "%lu:%2x ", i, dev->r_buf[i]);
#ifdef CM4040_DEBUG
pr_debug("%lu:%2x ", i, dev->r_buf[i]);
}
printk("\n");
pr_debug("\n");
#else
}
#endif
......@@ -547,7 +539,7 @@ static int cm4040_config_check(struct pcmcia_device *p_dev,
p_dev->io.IOAddrLines = cfg->io.flags & CISTPL_IO_LINES_MASK;
rc = pcmcia_request_io(p_dev, &p_dev->io);
dev_printk(KERN_INFO, &handle_to_dev(p_dev),
dev_printk(KERN_INFO, &p_dev->dev,
"pcmcia_request_io returned 0x%x\n", rc);
return rc;
}
......@@ -569,7 +561,7 @@ static int reader_config(struct pcmcia_device *link, int devno)
fail_rc = pcmcia_request_configuration(link, &link->conf);
if (fail_rc != 0) {
dev_printk(KERN_INFO, &handle_to_dev(link),
dev_printk(KERN_INFO, &link->dev,
"pcmcia_request_configuration failed 0x%x\n",
fail_rc);
goto cs_release;
......
......@@ -1213,12 +1213,12 @@ static irqreturn_t ipwireless_handle_v2_v3_interrupt(int irq,
irqreturn_t ipwireless_interrupt(int irq, void *dev_id)
{
struct ipw_hardware *hw = dev_id;
struct ipw_dev *ipw = dev_id;
if (hw->hw_version == HW_VERSION_1)
return ipwireless_handle_v1_interrupt(irq, hw);
if (ipw->hardware->hw_version == HW_VERSION_1)
return ipwireless_handle_v1_interrupt(irq, ipw->hardware);
else
return ipwireless_handle_v2_v3_interrupt(irq, hw);
return ipwireless_handle_v2_v3_interrupt(irq, ipw->hardware);
}
static void flush_packets_to_hw(struct ipw_hardware *hw)
......
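The ipwireless fix above makes the interrupt handler cast dev_id back to the type that was actually registered: a struct ipw_dev whose ->hardware member holds the hardware state, rather than treating dev_id as the hardware pointer itself. A toy reproduction of the invariant (structure names illustrative):

#include <stdio.h>

struct hardware { int hw_version; };
struct device   { struct hardware *hardware; };

static void handler(void *dev_id)
{
	/* must match the type passed at registration time */
	struct device *dev = dev_id;

	printf("hw version %d\n", dev->hardware->hw_version);
}

int main(void)
{
	struct hardware hw  = { .hw_version = 2 };
	struct device   dev = { .hardware = &hw };

	handler(&dev); /* registered with &dev, so the cast above is safe */
	return 0;
}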
......@@ -554,7 +554,6 @@ static int mgslpc_probe(struct pcmcia_device *link)
/* Interrupt setup */
link->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING;
link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Handler = NULL;
link->conf.Attributes = 0;
......@@ -572,69 +571,51 @@ static int mgslpc_probe(struct pcmcia_device *link)
/* Card has been inserted.
*/
#define CS_CHECK(fn, ret) \
do { last_fn = (fn); if ((last_ret = (ret)) != 0) goto cs_failed; } while (0)
static int mgslpc_ioprobe(struct pcmcia_device *p_dev,
cistpl_cftable_entry_t *cfg,
cistpl_cftable_entry_t *dflt,
unsigned int vcc,
void *priv_data)
{
if (cfg->io.nwin > 0) {
p_dev->io.Attributes1 = IO_DATA_PATH_WIDTH_AUTO;
if (!(cfg->io.flags & CISTPL_IO_8BIT))
p_dev->io.Attributes1 = IO_DATA_PATH_WIDTH_16;
if (!(cfg->io.flags & CISTPL_IO_16BIT))
p_dev->io.Attributes1 = IO_DATA_PATH_WIDTH_8;
p_dev->io.IOAddrLines = cfg->io.flags & CISTPL_IO_LINES_MASK;
p_dev->io.BasePort1 = cfg->io.win[0].base;
p_dev->io.NumPorts1 = cfg->io.win[0].len;
return pcmcia_request_io(p_dev, &p_dev->io);
}
return -ENODEV;
}
static int mgslpc_config(struct pcmcia_device *link)
{
MGSLPC_INFO *info = link->priv;
tuple_t tuple;
cisparse_t parse;
int last_fn, last_ret;
u_char buf[64];
cistpl_cftable_entry_t dflt = { 0 };
cistpl_cftable_entry_t *cfg;
int ret;
if (debug_level >= DEBUG_LEVEL_INFO)
printk("mgslpc_config(0x%p)\n", link);
tuple.Attributes = 0;
tuple.TupleData = buf;
tuple.TupleDataMax = sizeof(buf);
tuple.TupleOffset = 0;
/* get CIS configuration entry */
tuple.DesiredTuple = CISTPL_CFTABLE_ENTRY;
CS_CHECK(GetFirstTuple, pcmcia_get_first_tuple(link, &tuple));
cfg = &(parse.cftable_entry);
CS_CHECK(GetTupleData, pcmcia_get_tuple_data(link, &tuple));
CS_CHECK(ParseTuple, pcmcia_parse_tuple(&tuple, &parse));
if (cfg->flags & CISTPL_CFTABLE_DEFAULT) dflt = *cfg;
if (cfg->index == 0)
goto cs_failed;
link->conf.ConfigIndex = cfg->index;
link->conf.Attributes |= CONF_ENABLE_IRQ;
/* IO window settings */
link->io.NumPorts1 = 0;
if ((cfg->io.nwin > 0) || (dflt.io.nwin > 0)) {
cistpl_io_t *io = (cfg->io.nwin) ? &cfg->io : &dflt.io;
link->io.Attributes1 = IO_DATA_PATH_WIDTH_AUTO;
if (!(io->flags & CISTPL_IO_8BIT))
link->io.Attributes1 = IO_DATA_PATH_WIDTH_16;
if (!(io->flags & CISTPL_IO_16BIT))
link->io.Attributes1 = IO_DATA_PATH_WIDTH_8;
link->io.IOAddrLines = io->flags & CISTPL_IO_LINES_MASK;
link->io.BasePort1 = io->win[0].base;
link->io.NumPorts1 = io->win[0].len;
CS_CHECK(RequestIO, pcmcia_request_io(link, &link->io));
}
ret = pcmcia_loop_config(link, mgslpc_ioprobe, NULL);
if (ret != 0)
goto failed;
link->conf.Attributes = CONF_ENABLE_IRQ;
link->conf.IntType = INT_MEMORY_AND_IO;
link->conf.ConfigIndex = 8;
link->conf.Present = PRESENT_OPTION;
link->irq.Attributes |= IRQ_HANDLE_PRESENT;
link->irq.Handler = mgslpc_isr;
link->irq.Instance = info;
CS_CHECK(RequestIRQ, pcmcia_request_irq(link, &link->irq));
CS_CHECK(RequestConfiguration, pcmcia_request_configuration(link, &link->conf));
ret = pcmcia_request_irq(link, &link->irq);
if (ret)
goto failed;
ret = pcmcia_request_configuration(link, &link->conf);
if (ret)
goto failed;
info->io_base = link->io.BasePort1;
info->irq_level = link->irq.AssignedIRQ;
......@@ -654,8 +635,7 @@ static int mgslpc_config(struct pcmcia_device *link)
printk("\n");
return 0;
cs_failed:
cs_error(link, last_fn, last_ret);
failed:
mgslpc_release((u_long)link);
return -ENODEV;
}
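mgslpc_config() above drops the hand-rolled CIS tuple walk for pcmcia_loop_config(), which iterates the card's configuration candidates and calls a probe callback until one succeeds. A self-contained model of that iterate-until-probe-succeeds shape (all names illustrative):

#include <stdio.h>

struct candidate { int base, len; };

typedef int (*probe_fn)(const struct candidate *, void *);

static int loop_config(const struct candidate *tbl, int n,
		       probe_fn probe, void *priv)
{
	int i;

	for (i = 0; i < n; i++)
		if (probe(&tbl[i], priv) == 0)
			return 0; /* first viable configuration wins */
	return -1;
}

static int io_probe(const struct candidate *c, void *priv)
{
	(void)priv;
	return c->len >= 8 ? 0 : -1; /* need an 8-port I/O window */
}

int main(void)
{
	struct candidate tbl[] = { { 0x3f8, 4 }, { 0x2f8, 8 } };

	return loop_config(tbl, 2, io_probe, NULL) ? 1 : 0;
}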
......
......@@ -31,7 +31,7 @@
enum tpm_const {
TPM_MINOR = 224, /* officially assigned */
TPM_BUFSIZE = 2048,
TPM_BUFSIZE = 4096,
TPM_NUM_DEVICES = 256,
};
......
......@@ -257,6 +257,10 @@ static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count)
return size;
}
static int itpm;
module_param(itpm, bool, 0444);
MODULE_PARM_DESC(itpm, "Force iTPM workarounds (found on some Lenovo laptops)");
/*
* If interrupts are used (signaled by an irq set in the vendor structure)
* tpm.c can skip polling for the data to be available as the interrupt is
......@@ -293,7 +297,7 @@ static int tpm_tis_send(struct tpm_chip *chip, u8 *buf, size_t len)
wait_for_stat(chip, TPM_STS_VALID, chip->vendor.timeout_c,
&chip->vendor.int_queue);
status = tpm_tis_status(chip);
if ((status & TPM_STS_DATA_EXPECT) == 0) {
if (!itpm && (status & TPM_STS_DATA_EXPECT) == 0) {
rc = -EIO;
goto out_err;
}
......@@ -467,6 +471,10 @@ static int tpm_tis_init(struct device *dev, resource_size_t start,
"1.2 TPM (device-id 0x%X, rev-id %d)\n",
vendor >> 16, ioread8(chip->vendor.iobase + TPM_RID(0)));
if (itpm)
dev_info(dev, "Intel iTPM workaround enabled\n");
/* Figure out the capabilities */
intfcaps =
ioread32(chip->vendor.iobase +
......@@ -629,6 +637,7 @@ static struct pnp_device_id tpm_pnp_tbl[] __devinitdata = {
{"", 0}, /* User Specified */
{"", 0} /* Terminator */
};
MODULE_DEVICE_TABLE(pnp, tpm_pnp_tbl);
static __devexit void tpm_tis_pnp_remove(struct pnp_dev *dev)
{
......
......@@ -144,13 +144,6 @@ static int lnw_irq_type(unsigned irq, unsigned type)
static void lnw_irq_unmask(unsigned irq)
{
struct lnw_gpio *lnw = get_irq_chip_data(irq);
u32 gpio = irq - lnw->irq_base;
u8 reg = gpio / 32;
void __iomem *gedr;
gedr = (void __iomem *)(&lnw->reg_base->GEDR[reg]);
writel(BIT(gpio % 32), gedr);
};
static void lnw_irq_mask(unsigned irq)
......@@ -183,13 +176,11 @@ static void lnw_irq_handler(unsigned irq, struct irq_desc *desc)
gedr_v = readl(gedr);
if (!gedr_v)
continue;
for (gpio = reg*32; gpio < reg*32+32; gpio++) {
gedr_v = readl(gedr);
for (gpio = reg*32; gpio < reg*32+32; gpio++)
if (gedr_v & BIT(gpio % 32)) {
pr_debug("pin %d triggered\n", gpio);
generic_handle_irq(lnw->irq_base + gpio);
}
}
/* clear the edge detect status bit */
writel(gedr_v, gedr);
}
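The langwell hunk above stops re-reading GEDR on every pin: the handler latches the edge-detect status once, dispatches each set bit, then acknowledges exactly the bits it saw. A toy model of that latch-dispatch-ack sequence (a plain variable stands in for the write-1-to-clear register):

#include <stdint.h>
#include <stdio.h>

static uint32_t gedr = 0x00000012; /* pretend pins 1 and 4 fired */

int main(void)
{
	uint32_t pending = gedr; /* single read: a stable snapshot */
	int bit;

	for (bit = 0; bit < 32; bit++)
		if (pending & (1u << bit))
			printf("pin %d triggered\n", bit);

	gedr &= ~pending; /* hardware is write-1-to-clear; modeled by masking */
	return 0;
}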
......
......@@ -60,15 +60,6 @@ MODULE_AUTHOR("David Hinds <dahinds@users.sourceforge.net>");
MODULE_DESCRIPTION("PCMCIA ATA/IDE card driver");
MODULE_LICENSE("Dual MPL/GPL");
#define INT_MODULE_PARM(n, v) static int n = v; module_param(n, int, 0)
#ifdef CONFIG_PCMCIA_DEBUG
INT_MODULE_PARM(pc_debug, 0);
#define DEBUG(n, args...) if (pc_debug>(n)) printk(KERN_DEBUG args)
#else
#define DEBUG(n, args...)
#endif
/*====================================================================*/
typedef struct ide_info_t {
......@@ -98,7 +89,7 @@ static int ide_probe(struct pcmcia_device *link)
{
ide_info_t *info;
DEBUG(0, "ide_attach()\n");
dev_dbg(&link->dev, "ide_attach()\n");
/* Create new ide device */
info = kzalloc(sizeof(*info), GFP_KERNEL);
......@@ -112,7 +103,6 @@ static int ide_probe(struct pcmcia_device *link)
link->io.Attributes2 = IO_DATA_PATH_WIDTH_8;
link->io.IOAddrLines = 3;
link->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING;
link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->conf.Attributes = CONF_ENABLE_IRQ;
link->conf.IntType = INT_MEMORY_AND_IO;
......@@ -134,7 +124,7 @@ static void ide_detach(struct pcmcia_device *link)
ide_hwif_t *hwif = info->host->ports[0];
unsigned long data_addr, ctl_addr;
DEBUG(0, "ide_detach(0x%p)\n", link);
dev_dbg(&link->dev, "ide_detach(0x%p)\n", link);
data_addr = hwif->io_ports.data_addr;
ctl_addr = hwif->io_ports.ctl_addr;
......@@ -217,9 +207,6 @@ static struct ide_host *idecs_register(unsigned long io, unsigned long ctl,
======================================================================*/
#define CS_CHECK(fn, ret) \
do { last_fn = (fn); if ((last_ret = (ret)) != 0) goto cs_failed; } while (0)
struct pcmcia_config_check {
unsigned long ctl_base;
int skip_vcc;
......@@ -282,11 +269,11 @@ static int ide_config(struct pcmcia_device *link)
{
ide_info_t *info = link->priv;
struct pcmcia_config_check *stk = NULL;
int last_ret = 0, last_fn = 0, is_kme = 0;
int ret = 0, is_kme = 0;
unsigned long io_base, ctl_base;
struct ide_host *host;
DEBUG(0, "ide_config(0x%p)\n", link);
dev_dbg(&link->dev, "ide_config(0x%p)\n", link);
is_kme = ((link->manf_id == MANFID_KME) &&
((link->card_id == PRODID_KME_KXLC005_A) ||
......@@ -306,8 +293,12 @@ static int ide_config(struct pcmcia_device *link)
io_base = link->io.BasePort1;
ctl_base = stk->ctl_base;
CS_CHECK(RequestIRQ, pcmcia_request_irq(link, &link->irq));
CS_CHECK(RequestConfiguration, pcmcia_request_configuration(link, &link->conf));
ret = pcmcia_request_irq(link, &link->irq);
if (ret)
goto failed;
ret = pcmcia_request_configuration(link, &link->conf);
if (ret)
goto failed;
/* disable drive interrupts during IDE probe */
outb(0x02, ctl_base);
......@@ -342,8 +333,6 @@ static int ide_config(struct pcmcia_device *link)
printk(KERN_NOTICE "ide-cs: ide_config failed memory allocation\n");
goto failed;
cs_failed:
cs_error(link, last_fn, last_ret);
failed:
kfree(stk);
ide_release(link);
......@@ -363,7 +352,7 @@ static void ide_release(struct pcmcia_device *link)
ide_info_t *info = link->priv;
struct ide_host *host = info->host;
DEBUG(0, "ide_release(0x%p)\n", link);
dev_dbg(&link->dev, "ide_release(0x%p)\n", link);
if (info->ndev)
/* FIXME: if this fails we need to queue the cleanup somehow
......
......@@ -447,6 +447,27 @@ static struct dmi_system_id __initdata i8042_dmi_reset_table[] = {
DMI_MATCH(DMI_PRODUCT_NAME, "N10"),
},
},
{
.ident = "Dell Vostro 1320",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1320"),
},
},
{
.ident = "Dell Vostro 1520",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1520"),
},
},
{
.ident = "Dell Vostro 1720",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1720"),
},
},
{ }
};
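The i8042 entries above extend a DMI quirk table: an entry applies when every DMI_MATCH string is found in the machine's identification data, and the table is terminated by an empty entry. A sketch of that substring lookup (table contents taken from the hunk; the matching helper is an assumption, not the dmi core):

#include <stdio.h>
#include <string.h>

struct dmi_entry { const char *vendor, *product; };

static const struct dmi_entry reset_quirks[] = {
	{ "Dell Inc.", "Vostro 1320" },
	{ "Dell Inc.", "Vostro 1520" },
	{ "Dell Inc.", "Vostro 1720" },
	{ NULL, NULL } /* terminator, like the { } entry above */
};

static int needs_reset_quirk(const char *vendor, const char *product)
{
	const struct dmi_entry *e;

	for (e = reset_quirks; e->vendor; e++)
		if (strstr(vendor, e->vendor) && strstr(product, e->product))
			return 1;
	return 0;
}

int main(void)
{
	printf("%d\n", needs_reset_quirk("Dell Inc.", "Vostro 1520")); /* 1 */
	printf("%d\n", needs_reset_quirk("LENOVO", "ThinkPad"));       /* 0 */
	return 0;
}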
......
......@@ -111,8 +111,6 @@ static int avmcs_probe(struct pcmcia_device *p_dev)
p_dev->irq.Attributes = IRQ_TYPE_EXCLUSIVE;
p_dev->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING|IRQ_FIRST_SHARED;
p_dev->irq.IRQInfo1 = IRQ_LEVEL_ID;
/* General socket configuration */
p_dev->conf.Attributes = CONF_ENABLE_IRQ;
p_dev->conf.IntType = INT_MEMORY_AND_IO;
......@@ -198,7 +196,6 @@ static int avmcs_config(struct pcmcia_device *link)
*/
i = pcmcia_request_irq(link, &link->irq);
if (i != 0) {
cs_error(link, RequestIRQ, i);
/* undo */
pcmcia_disable_device(link);
break;
......@@ -209,7 +206,6 @@ static int avmcs_config(struct pcmcia_device *link)
*/
i = pcmcia_request_configuration(link, &link->conf);
if (i != 0) {
cs_error(link, RequestConfiguration, i);
pcmcia_disable_device(link);
break;
}
......
(4 collapsed file diffs omitted)
......@@ -1650,11 +1650,12 @@ static void raid1d(mddev_t *mddev)
r1_bio->sector,
r1_bio->sectors);
unfreeze_array(conf);
}
} else
md_error(mddev,
conf->mirrors[r1_bio->read_disk].rdev);
bio = r1_bio->bios[r1_bio->read_disk];
if ((disk=read_balance(conf, r1_bio)) == -1 ||
disk == r1_bio->read_disk) {
if ((disk=read_balance(conf, r1_bio)) == -1) {
printk(KERN_ALERT "raid1: %s: unrecoverable I/O"
" read error for block %llu\n",
bdevname(bio->bi_bdev,b),
......
(239 collapsed file diffs omitted)