Commit c373ba99, authored by Paul Mundt
ARM TCM (Tightly-Coupled Memory) handling in Linux
----
Written by Linus Walleij <linus.walleij@stericsson.com>
Some ARM SoCs have a so-called TCM (Tightly-Coupled Memory).
This is usually just a few (4-64) KiB of RAM inside the ARM
processor.
Because it is embedded inside the CPU, the TCM has a
Harvard architecture, so there is an ITCM (instruction TCM)
and a DTCM (data TCM). The DTCM cannot contain any
instructions, but the ITCM can actually contain data.
The size of DTCM or ITCM is minimum 4KiB so the typical
minimum configuration is 4KiB ITCM and 4KiB DTCM.
ARM CPUs have special registers to read out status, physical
location and size of TCM memories. arch/arm/include/asm/cputype.h
defines a CPUID_TCM register that you can read out from the
system control coprocessor. Documentation from ARM can be found
at http://infocenter.arm.com; search for "TCM Status Register"
to see documents for all CPUs. Reading this register you can
determine whether ITCM (bit 0) and/or DTCM (bit 16) is present in the
machine.
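As a minimal sketch, presence can be probed with the read_cpuid_tcmstatus()
helper from <asm/cputype.h> (added along with the TCM support); the probe
function itself is purely illustrative:

	#include <linux/kernel.h>
	#include <asm/cputype.h>

	static void __init report_tcm_presence(void)
	{
		u32 tcm_status = read_cpuid_tcmstatus();

		if (tcm_status & 1)		/* bit 0: ITCM present */
			pr_info("ITCM present\n");
		if (tcm_status & (1 << 16))	/* bit 16: DTCM present */
			pr_info("DTCM present\n");
	}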
There is further a TCM region register (search for "TCM Region
Registers" at the ARM site) that can report and modify the location
and size of TCM memories at runtime. This is used to read out and modify
TCM location and size. Notice that this is not an MMU table: you
actually move the physical location of the TCM around. At the
place you put it, it will mask any underlying RAM from the
CPU, so it is usually wise not to overlap any physical RAM with
the TCM. The TCM memory exists totally outside the MMU and will
override any MMU mappings.
Code executing inside the ITCM does not "see" any MMU mappings
and e.g. register accesses must be made to physical addresses.
TCM is used for a few things:
- FIQ and other interrupt handlers that need deterministic
timing and cannot wait for cache misses.
- Idle loops where all external RAM is set to self-refresh
retention mode, so only on-chip RAM is accessible by
the CPU and then we hang inside ITCM waiting for an
interrupt.
- Other operations which imply shutting off or reconfiguring
the external RAM controller.
There is an interface for using TCM on the ARM architecture
in <asm/tcm.h>. Using this interface it is possible to:
- Define the physical address and size of ITCM and DTCM.
- Tag functions to be compiled into ITCM.
- Tag data and constants to be allocated to DTCM and ITCM.
- Have the remaining TCM RAM added to a special
allocation pool with gen_pool_create() and gen_pool_add()
and provide tcm_alloc() and tcm_free() for this
memory. Such a heap is great for things like saving
device state when shutting off device power domains.
A machine that has TCM memory shall select HAVE_TCM in
arch/arm/Kconfig for itself; the
rest of the functionality then depends on the physical
location and size of ITCM and DTCM being defined in
mach/memory.h for the machine. Code that needs to use
TCM shall #include <asm/tcm.h>. If the TCM is not located
at the place given in memory.h, it will be moved using
the TCM Region registers.
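For example, a machine could provide definitions like these in its
mach/memory.h (the addresses below are purely illustrative):

	#define ITCM_OFFSET	0xffff2000
	#define ITCM_END	0xffff3fff	/* 8 KiB of ITCM */
	#define DTCM_OFFSET	0xffff4000
	#define DTCM_END	0xffff5fff	/* 8 KiB of DTCM */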
Functions to go into ITCM can be tagged like this:
int __tcmfunc foo(int bar);
Variables to go into DTCM can be tagged like this:
int __tcmdata foo;
Constants can be tagged like this:
int __tcmconst foo;
To put assembler into TCM just use
.section ".tcm.text" or .section ".tcm.data"
respectively.
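For instance, a word of data and a trivial spin loop could be placed in
TCM from assembly like this (a sketch only; the symbol names are
illustrative):

	.section ".tcm.data", "aw"
	.global tcm_counter
tcm_counter:
	.word	0

	.section ".tcm.text", "ax"
	.global tcm_spin
tcm_spin:
	b	tcm_spin	@ spin inside ITCM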
Example code:
#include <asm/tcm.h>
/* Uninitialized data */
static u32 __tcmdata tcmvar;
/* Initialized data */
static u32 __tcmdata tcmassigned = 0x2BADBABEU;
/* Constant */
static const u32 __tcmconst tcmconst = 0xCAFEBABEU;
static void __tcmlocalfunc tcm_to_tcm(void)
{
int i;
for (i = 0; i < 100; i++)
tcmvar ++;
}
static void __tcmfunc hello_tcm(void)
{
/* Some abstract code that runs in ITCM */
int i;
for (i = 0; i < 100; i++) {
tcmvar ++;
}
tcm_to_tcm();
}
static void __init test_tcm(void)
{
u32 *tcmem;
int i;
hello_tcm();
printk("Hello TCM executed from ITCM RAM\n");
printk("TCM variable from testrun: %u @ %p\n", tcmvar, &tcmvar);
tcmvar = 0xDEADBEEFU;
printk("TCM variable: 0x%x @ %p\n", tcmvar, &tcmvar);
printk("TCM assigned variable: 0x%x @ %p\n", tcmassigned, &tcmassigned);
printk("TCM constant: 0x%x @ %p\n", tcmconst, &tcmconst);
/* Allocate some TCM memory from the pool */
tcmem = tcm_alloc(20);
if (tcmem) {
printk("TCM Allocated 20 bytes of TCM @ %p\n", tcmem);
tcmem[0] = 0xDEADBEEFU;
tcmem[1] = 0x2BADBABEU;
tcmem[2] = 0xCAFEBABEU;
tcmem[3] = 0xDEADBEEFU;
tcmem[4] = 0x2BADBABEU;
for (i = 0; i < 5; i++)
printk("TCM tcmem[%d] = %08x\n", i, tcmem[i]);
tcm_free(tcmem, 20);
}
}
......@@ -194,7 +194,6 @@ static void cfag12864b_blit(void)
*/
#include <stdio.h>
#include <string.h>
#define EXAMPLES 6
......
......@@ -408,6 +408,26 @@ You can attach the current shell task by echoing 0:
# echo 0 > tasks
2.3 Mounting hierarchies by name
--------------------------------
Passing the name=<x> option when mounting a cgroups hierarchy
associates the given name with the hierarchy. This can be used when
mounting a pre-existing hierarchy, in order to refer to it by name
rather than by its set of active subsystems. Each hierarchy is either
nameless, or has a unique name.
The name should match [\w.-]+
When passing a name=<x> option for a new hierarchy, you need to
specify subsystems manually; the legacy behaviour of mounting all
subsystems when none are explicitly specified is not supported when
you give a subsystem a name.
The name of the subsystem appears as part of the hierarchy description
in /proc/mounts and /proc/<pid>/cgroups.
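For example, to mount a fresh hierarchy named "mygroup" with only the
cpuset subsystem attached (the mount point is illustrative):

# mount -t cgroup -o cpuset,name=mygroup cgroup /dev/cgroup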
3. Kernel API
=============
......@@ -501,7 +521,7 @@ rmdir() will fail with it. From this behavior, pre_destroy() can be
called multiple times against a cgroup.
int can_attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
struct task_struct *task)
struct task_struct *task, bool threadgroup)
(cgroup_mutex held by caller)
Called prior to moving a task into a cgroup; if the subsystem
......@@ -509,14 +529,20 @@ returns an error, this will abort the attach operation. If a NULL
task is passed, then a successful result indicates that *any*
unspecified task can be moved into the cgroup. Note that this isn't
called on a fork. If this method returns 0 (success) then this should
remain valid while the caller holds cgroup_mutex.
remain valid while the caller holds cgroup_mutex. If threadgroup is
true, then a successful result indicates that all threads in the given
thread's threadgroup can be moved together.
void attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
struct cgroup *old_cgrp, struct task_struct *task)
struct cgroup *old_cgrp, struct task_struct *task,
bool threadgroup)
(cgroup_mutex held by caller)
Called after the task has been attached to the cgroup, to allow any
post-attachment activity that requires memory allocations or blocking.
If threadgroup is true, the subsystem should take care of all threads
in the specified thread's threadgroup. Currently does not support any
subsystem that might need the old_cgrp for every thread in the group.
void fork(struct cgroup_subsys *ss, struct task_struct *task)
......
......@@ -179,6 +179,9 @@ The reclaim algorithm has not been modified for cgroups, except that
pages that are selected for reclaiming come from the per cgroup LRU
list.
NOTE: Reclaim does not work for the root cgroup, since we cannot set any
limits on the root cgroup.
2. Locking
The memory controller uses the following hierarchy
......@@ -210,6 +213,7 @@ We can alter the memory limit:
NOTE: We can use a suffix (k, K, m, M, g or G) to indicate values in kilo,
mega or gigabytes.
NOTE: We can write "-1" to reset the *.limit_in_bytes (unlimited).
NOTE: We cannot set limits on the root cgroup any more.
# cat /cgroups/0/memory.limit_in_bytes
4194304
......@@ -375,7 +379,42 @@ cgroups created below it.
NOTE2: This feature can be enabled/disabled per subtree.
7. TODO
7. Soft limits
Soft limits allow for greater sharing of memory. The idea behind soft limits
is to allow control groups to use as much of the memory as needed, provided
a. There is no memory contention
b. They do not exceed their hard limit
When the system detects memory contention or low memory, control groups
are pushed back to their soft limits. If the soft limit of each control
group is very high, they are pushed back as much as possible to make
sure that one control group does not starve the others of memory.
Please note that soft limits are a best-effort feature: they come with
no guarantees, but the kernel does its best to make sure that when memory is
heavily contended for, memory is allocated based on the soft limit
hints/setup. Currently soft-limit-based reclaim is set up so that
it gets invoked from balance_pgdat (kswapd).
7.1 Interface
Soft limits can be set up using the following commands (in this example we
assume a soft limit of 256 megabytes):
# echo 256M > memory.soft_limit_in_bytes
If we want to change this to 1G, we can at any time use
# echo 1G > memory.soft_limit_in_bytes
NOTE1: Soft limits take effect over a long period of time, since they involve
reclaiming memory for balancing between memory cgroups.
NOTE2: It is recommended to always set the soft limit below the hard limit,
otherwise the hard limit will take precedence.
8. TODO
1. Add support for accounting huge pages (as a separate controller)
2. Make per-cgroup scanner reclaim not-shared pages first
......
......@@ -54,20 +54,23 @@ features surfaced as a result:
3.1 General format of the API:
struct dma_async_tx_descriptor *
async_<operation>(<op specific parameters>,
enum async_tx_flags flags,
struct dma_async_tx_descriptor *dependency,
dma_async_tx_callback callback_routine,
void *callback_parameter);
async_<operation>(<op specific parameters>, struct async_submit_ctl *submit)
3.2 Supported operations:
memcpy - memory copy between a source and a destination buffer
memset - fill a destination buffer with a byte value
xor - xor a series of source buffers and write the result to a
destination buffer
xor_zero_sum - xor a series of source buffers and set a flag if the
result is zero. The implementation attempts to prevent
writes to memory
memcpy - memory copy between a source and a destination buffer
memset - fill a destination buffer with a byte value
xor - xor a series of source buffers and write the result to a
destination buffer
xor_val - xor a series of source buffers and set a flag if the
result is zero. The implementation attempts to prevent
writes to memory
pq - generate the p+q (raid6 syndrome) from a series of source buffers
pq_val - validate that a p and or q buffer are in sync with a given series of
sources
datap - (raid6_datap_recov) recover a raid6 data block and the p block
from the given sources
2data - (raid6_2data_recov) recover 2 raid6 data blocks from the given
sources
3.3 Descriptor management:
The return value is non-NULL and points to a 'descriptor' when the operation
......@@ -80,8 +83,8 @@ acknowledged by the application before the offload engine driver is allowed to
recycle (or free) the descriptor. A descriptor can be acked by one of the
following methods:
1/ setting the ASYNC_TX_ACK flag if no child operations are to be submitted
2/ setting the ASYNC_TX_DEP_ACK flag to acknowledge the parent
descriptor of a new operation.
2/ submitting an unacknowledged descriptor as a dependency to another
async_tx call will implicitly set the acknowledged state.
3/ calling async_tx_ack() on the descriptor.
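A minimal sketch of the third method, explicitly acknowledging a descriptor
once the application is done with it (dest, src, len and submit are assumed
to have been set up as in the example in section 3.5):

	struct dma_async_tx_descriptor *tx;

	tx = async_memcpy(dest, src, 0, 0, len, &submit);
	async_tx_ack(tx);	/* allow the driver to recycle the descriptor */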
3.4 When does the operation execute?
......@@ -119,30 +122,42 @@ of an operation.
Perform a xor->copy->xor operation where each operation depends on the
result from the previous operation:
void complete_xor_copy_xor(void *param)
void callback(void *param)
{
printk("complete\n");
struct completion *cmp = param;
complete(cmp);
}
int run_xor_copy_xor(struct page **xor_srcs,
int xor_src_cnt,
struct page *xor_dest,
size_t xor_len,
struct page *copy_src,
struct page *copy_dest,
size_t copy_len)
void run_xor_copy_xor(struct page **xor_srcs,
int xor_src_cnt,
struct page *xor_dest,
size_t xor_len,
struct page *copy_src,
struct page *copy_dest,
size_t copy_len)
{
struct dma_async_tx_descriptor *tx;
addr_conv_t addr_conv[xor_src_cnt];
struct async_submit_ctl submit;
addr_conv_t addr_conv[NDISKS];
struct completion cmp;
init_async_submit(&submit, ASYNC_TX_XOR_DROP_DST, NULL, NULL, NULL,
addr_conv);
tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len, &submit)
tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,
ASYNC_TX_XOR_DROP_DST, NULL, NULL, NULL);
tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len,
ASYNC_TX_DEP_ACK, tx, NULL, NULL);
tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,
ASYNC_TX_XOR_DROP_DST | ASYNC_TX_DEP_ACK | ASYNC_TX_ACK,
tx, complete_xor_copy_xor, NULL);
submit->depend_tx = tx;
tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len, &submit);
init_completion(&cmp);
init_async_submit(&submit, ASYNC_TX_XOR_DROP_DST | ASYNC_TX_ACK, tx,
callback, &cmp, addr_conv);
tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len, &submit);
async_tx_issue_pending_all();
wait_for_completion(&cmp);
}
See include/linux/async_tx.h for more information on the flags. See the
......
......@@ -4,7 +4,7 @@ Shared Subtrees
Contents:
1) Overview
2) Features
3) smount command
3) Setting mount states
4) Use-case
5) Detailed semantics
6) Quiz
......@@ -41,14 +41,14 @@ replicas continue to be exactly same.
Here is an example:
Lets say /mnt has a mount that is shared.
Let's say /mnt has a mount that is shared.
mount --make-shared /mnt
note: mount command does not yet support the --make-shared flag.
I have included a small C program which does the same by executing
'smount /mnt shared'
Note: mount(8) command now supports the --make-shared flag,
so the sample 'smount' program is no longer needed and has been
removed.
#mount --bind /mnt /tmp
# mount --bind /mnt /tmp
The above command replicates the mount at /mnt to the mountpoint /tmp
and the contents of both the mounts remain identical.
......@@ -58,8 +58,8 @@ replicas continue to be exactly same.
#ls /tmp
a b c
Now lets say we mount a device at /tmp/a
#mount /dev/sd0 /tmp/a
Now let's say we mount a device at /tmp/a
# mount /dev/sd0 /tmp/a
#ls /tmp/a
t1 t2 t2
......@@ -80,21 +80,20 @@ replicas continue to be exactly same.
Here is an example:
Lets say /mnt has a mount which is shared.
#mount --make-shared /mnt
Let's say /mnt has a mount which is shared.
# mount --make-shared /mnt
Lets bind mount /mnt to /tmp
#mount --bind /mnt /tmp
Let's bind mount /mnt to /tmp
# mount --bind /mnt /tmp
the new mount at /tmp becomes a shared mount and it is a replica of
the mount at /mnt.
Now lets make the mount at /tmp; a slave of /mnt
#mount --make-slave /tmp
[or smount /tmp slave]
Now let's make the mount at /tmp a slave of /mnt
# mount --make-slave /tmp
lets mount /dev/sd0 on /mnt/a
#mount /dev/sd0 /mnt/a
let's mount /dev/sd0 on /mnt/a
# mount /dev/sd0 /mnt/a
#ls /mnt/a
t1 t2 t3
......@@ -104,9 +103,9 @@ replicas continue to be exactly same.
Note the mount event has propagated to the mount at /tmp
However lets see what happens if we mount something on the mount at /tmp
However let's see what happens if we mount something on the mount at /tmp
#mount /dev/sd1 /tmp/b
# mount /dev/sd1 /tmp/b
#ls /tmp/b
s1 s2 s3
......@@ -124,12 +123,11 @@ replicas continue to be exactly same.
2d) An unbindable mount is an unbindable private mount
lets say we have a mount at /mnt and we make is unbindable
let's say we have a mount at /mnt and we make it unbindable
#mount --make-unbindable /mnt
[ smount /mnt unbindable ]
# mount --make-unbindable /mnt
Lets try to bind mount this mount somewhere else.
Let's try to bind mount this mount somewhere else.
# mount --bind /mnt /tmp
mount: wrong fs type, bad option, bad superblock on /mnt,
or too many mounted file systems
......@@ -137,149 +135,15 @@ replicas continue to be exactly same.
Binding an unbindable mount is an invalid operation.
3) smount command
3) Setting mount states
Currently the mount command is not aware of shared subtree features.
Work is in progress to add the support in mount ( util-linux package ).
Till then use the following program.
The mount command (util-linux package) can be used to set mount
states:
------------------------------------------------------------------------
//
//this code was developed my Miklos Szeredi <miklos@szeredi.hu>
//and modified by Ram Pai <linuxram@us.ibm.com>
// sample usage:
// smount /tmp shared
//
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/mount.h>
#include <sys/fsuid.h>
#ifndef MS_REC
#define MS_REC 0x4000 /* 16384: Recursive loopback */
#endif
#ifndef MS_SHARED
#define MS_SHARED 1<<20 /* Shared */
#endif
#ifndef MS_PRIVATE
#define MS_PRIVATE 1<<18 /* Private */
#endif
#ifndef MS_SLAVE
#define MS_SLAVE 1<<19 /* Slave */
#endif
#ifndef MS_UNBINDABLE
#define MS_UNBINDABLE 1<<17 /* Unbindable */
#endif
int main(int argc, char *argv[])
{
int type;
if(argc != 3) {
fprintf(stderr, "usage: %s dir "
"<rshared|rslave|rprivate|runbindable|shared|slave"
"|private|unbindable>\n" , argv[0]);
return 1;
}
fprintf(stdout, "%s %s %s\n", argv[0], argv[1], argv[2]);
if (strcmp(argv[2],"rshared")==0)
type=(MS_SHARED|MS_REC);
else if (strcmp(argv[2],"rslave")==0)
type=(MS_SLAVE|MS_REC);
else if (strcmp(argv[2],"rprivate")==0)
type=(MS_PRIVATE|MS_REC);
else if (strcmp(argv[2],"runbindable")==0)
type=(MS_UNBINDABLE|MS_REC);
else if (strcmp(argv[2],"shared")==0)
type=MS_SHARED;
else if (strcmp(argv[2],"slave")==0)
type=MS_SLAVE;
else if (strcmp(argv[2],"private")==0)
type=MS_PRIVATE;
else if (strcmp(argv[2],"unbindable")==0)
type=MS_UNBINDABLE;
else {
fprintf(stderr, "invalid operation: %s\n", argv[2]);
return 1;
}
setfsuid(getuid());
if(mount("", argv[1], "dontcare", type, "") == -1) {
perror("mount");
return 1;
}
return 0;
}
-----------------------------------------------------------------------
Copy the above code snippet into smount.c
gcc -o smount smount.c
(i) To mark all the mounts under /mnt as shared execute the following
command:
smount /mnt rshared
the corresponding syntax planned for mount command is
mount --make-rshared /mnt
just to mark a mount /mnt as shared, execute the following
command:
smount /mnt shared
the corresponding syntax planned for mount command is
mount --make-shared /mnt
(ii) To mark all the shared mounts under /mnt as slave execute the
following
command:
smount /mnt rslave
the corresponding syntax planned for mount command is
mount --make-rslave /mnt
just to mark a mount /mnt as slave, execute the following
command:
smount /mnt slave
the corresponding syntax planned for mount command is
mount --make-slave /mnt
(iii) To mark all the mounts under /mnt as private execute the
following command:
smount /mnt rprivate
the corresponding syntax planned for mount command is
mount --make-rprivate /mnt
just to mark a mount /mnt as private, execute the following
command:
smount /mnt private
the corresponding syntax planned for mount command is
mount --make-private /mnt
NOTE: by default all the mounts are created as private. But if
you want to change some shared/slave/unbindable mount as
private at a later point in time, this command can help.
(iv) To mark all the mounts under /mnt as unbindable execute the
following
command:
smount /mnt runbindable
the corresponding syntax planned for mount command is
mount --make-runbindable /mnt
just to mark a mount /mnt as unbindable, execute the following
command:
smount /mnt unbindable
the corresponding syntax planned for mount command is
mount --make-unbindable /mnt
mount --make-shared mountpoint
mount --make-slave mountpoint
mount --make-private mountpoint
mount --make-unbindable mountpoint
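The recursive variants, which apply the requested state to the whole
subtree under the mount point, follow the same pattern:

	mount --make-rshared mountpoint
	mount --make-rslave mountpoint
	mount --make-rprivate mountpoint
	mount --make-runbindable mountpoint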
4) Use cases
......@@ -350,7 +214,7 @@ replicas continue to be exactly same.
mount --rbind / /view/v3
mount --rbind / /view/v4
and if /usr has a versioning filesystem mounted, than that
and if /usr has a versioning filesystem mounted, then that
mount appears at /view/v1/usr, /view/v2/usr, /view/v3/usr and
/view/v4/usr too
......@@ -390,7 +254,7 @@ replicas continue to be exactly same.
For example:
mount --make-shared /mnt
mount --bin /mnt /tmp
mount --bind /mnt /tmp
The mount at /mnt and that at /tmp are both shared and belong
to the same peer group. Anything mounted or unmounted under
......@@ -558,7 +422,7 @@ replicas continue to be exactly same.
then the subtree under the unbindable mount is pruned in the new
location.
eg: lets say we have the following mount tree.
eg: let's say we have the following mount tree.
A
/ \
......@@ -566,7 +430,7 @@ replicas continue to be exactly same.
/ \ / \
D E F G
Lets say all the mount except the mount C in the tree are
Let's say all the mounts except the mount C in the tree are
of a type other than unbindable.
If this tree is rbound to say Z
......@@ -683,13 +547,13 @@ replicas continue to be exactly same.
'b' on mounts that receive propagation from mount 'B' and does not have
sub-mounts within them are unmounted.
Example: Lets say 'B1', 'B2', 'B3' are shared mounts that propagate to
Example: Let's say 'B1', 'B2', 'B3' are shared mounts that propagate to
each other.
lets say 'A1', 'A2', 'A3' are first mounted at dentry 'b' on mount
let's say 'A1', 'A2', 'A3' are first mounted at dentry 'b' on mount
'B1', 'B2' and 'B3' respectively.
lets say 'C1', 'C2', 'C3' are next mounted at the same dentry 'b' on
let's say 'C1', 'C2', 'C3' are next mounted at the same dentry 'b' on
mount 'B1', 'B2' and 'B3' respectively.
if 'C1' is unmounted, all the mounts that are most-recently-mounted on
......@@ -710,7 +574,7 @@ replicas continue to be exactly same.
A cloned namespace contains all the mounts as that of the parent
namespace.
Lets say 'A' and 'B' are the corresponding mounts in the parent and the
Let's say 'A' and 'B' are the corresponding mounts in the parent and the
child namespace.
If 'A' is shared, then 'B' is also shared and 'A' and 'B' propagate to
......@@ -759,11 +623,11 @@ replicas continue to be exactly same.
mount --make-slave /mnt
At this point we have the first mount at /tmp and
its root dentry is 1. Lets call this mount 'A'
its root dentry is 1. Let's call this mount 'A'
And then we have a second mount at /tmp1 with root
dentry 2. Lets call this mount 'B'
dentry 2. Let's call this mount 'B'
Next we have a third mount at /mnt with root dentry
mnt. Lets call this mount 'C'
mnt. Let's call this mount 'C'
'B' is the slave of 'A' and 'C' is a slave of 'B'
A -> B -> C
......@@ -794,7 +658,7 @@ replicas continue to be exactly same.
Q3 Why is unbindable mount needed?
Lets say we want to replicate the mount tree at multiple
Let's say we want to replicate the mount tree at multiple
locations within the same subtree.
if one rbind mounts a tree within the same subtree 'n' times
......@@ -803,7 +667,7 @@ replicas continue to be exactly same.
mounts. Here is an example.
step 1:
lets say the root tree has just two directories with
let's say the root tree has just two directories with
one vfsmount.
root
/ \
......@@ -875,7 +739,7 @@ replicas continue to be exactly same.
Unclonable mounts come in handy here.
step 1:
lets say the root tree has just two directories with
let's say the root tree has just two directories with
one vfsmount.
root
/ \
......
......@@ -536,6 +536,7 @@ struct address_space_operations {
/* migrate the contents of a page to the specified target */
int (*migratepage) (struct page *, struct page *);
int (*launder_page) (struct page *);
int (*error_remove_page) (struct address_space *mapping, struct page *page);
};
writepage: called by the VM to write a dirty page to backing store.
......@@ -694,6 +695,12 @@ struct address_space_operations {
prevent redirtying the page, it is kept locked during the whole
operation.
error_remove_page: normally set to generic_error_remove_page if truncation
is ok for this address space. Used for memory failure handling.
Setting this implies you deal with pages going away under you,
unless you have them locked or reference counts increased.
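A filesystem for which this is acceptable can simply point the hook at the
generic helper (foo_aops is a hypothetical operations table):

	static const struct address_space_operations foo_aops = {
		/* ... other methods ... */
		.error_remove_page	= generic_error_remove_page,
	};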
The File Object
===============
......
......@@ -135,6 +135,7 @@ Code Seq# Include File Comments
<http://mikonos.dia.unisa.it/tcfs>
'l' 40-7F linux/udf_fs_i.h in development:
<http://sourceforge.net/projects/linux-udf/>
'm' 00-09 linux/mmtimer.h
'm' all linux/mtio.h conflict!
'm' all linux/soundcard.h conflict!
'm' all linux/synclink.h conflict!
......
......@@ -96,13 +96,16 @@ handles that the Linux kernel will allocate. When you get lots
of error messages about running out of file handles, you might
want to increase this limit.
The three values in file-nr denote the number of allocated
file handles, the number of unused file handles and the maximum
number of file handles. When the allocated file handles come
close to the maximum, but the number of unused file handles is
significantly greater than 0, you've encountered a peak in your
usage of file handles and you don't need to increase the maximum.
Historically, the three values in file-nr denoted the number of
allocated file handles, the number of allocated but unused file
handles, and the maximum number of file handles. Linux 2.6 always
reports 0 as the number of free file handles -- this is not an
error, it just means that the number of allocated file handles
exactly matches the number of used file handles.
Attempts to allocate more file descriptors than file-max are
reported with printk; look for "VFS: file-max limit <number>
reached".
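For example (the first and last values are illustrative; the middle value is
always 0 on Linux 2.6):

	# cat /proc/sys/fs/file-nr
	1632	0	97811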
==============================================================
nr_open:
......
......@@ -22,6 +22,7 @@ show up in /proc/sys/kernel:
- callhome [ S390 only ]
- auto_msgmni
- core_pattern
- core_pipe_limit
- core_uses_pid
- ctrl-alt-del
- dentry-state
......@@ -135,6 +136,27 @@ core_pattern is used to specify a core dumpfile pattern name.
==============================================================
core_pipe_limit:
This sysctl is only applicable when core_pattern is configured to pipe core
files to a user space helper (when the first character of core_pattern is a '|',
see above). When collecting cores via a pipe to an application, it is
occasionally useful for the collecting application to gather data about the
crashing process from its /proc/pid directory. In order to do this safely, the
kernel must wait for the collecting process to exit, so as not to remove the
crashing process's proc files prematurely. This in turn creates the possibility
that a misbehaving userspace collecting process can block the reaping of a
crashed process simply by never exiting. This sysctl defends against that. It
defines how many concurrent crashing processes may be piped to user space
applications in parallel. If this value is exceeded, then those crashing
processes above that value are noted via the kernel log and their cores are
skipped. 0 is a special value, indicating that unlimited processes may be
captured in parallel, but that no waiting will take place (i.e. the collecting
process is not guaranteed access to /proc/<crashing pid>/). This value defaults
to 0.
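For example, to pipe cores to a (hypothetical) collector program and allow at
most 4 crashing processes to be serviced concurrently:

	# echo "|/usr/local/sbin/core-collector %p %s" > /proc/sys/kernel/core_pattern
	# echo 4 > /proc/sys/kernel/core_pipe_limit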
==============================================================
core_uses_pid:
The default coredump filename is "core". By setting
......
......@@ -32,6 +32,8 @@ Currently, these files are in /proc/sys/vm:
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
......@@ -53,7 +55,6 @@ Currently, these files are in /proc/sys/vm:
- vfs_cache_pressure
- zone_reclaim_mode
==============================================================
block_dump
......@@ -275,6 +276,44 @@ e.g., up to one or two maps per allocation.
The default value is 65536.
=============================================================
memory_failure_early_kill:
Control how to kill processes when an uncorrected memory error (typically
a 2-bit error in a memory module) that cannot be handled by the kernel is
detected in the background by hardware. In some cases (like the page
still having a valid copy on disk) the kernel will handle the failure
transparently without affecting any applications. But if there is
no other up-to-date copy of the data, it will kill processes to prevent any
data corruption from propagating.
1: Kill all processes that have the corrupted, non-reloadable page mapped
as soon as the corruption is detected. Note this is not supported
for a few types of pages, like kernel internally allocated data or
the swap cache, but works for the majority of user pages.
0: Only unmap the corrupted page from all processes and only kill a process
that tries to access it.
The kill is done using a catchable SIGBUS with BUS_MCEERR_AO, so processes can
handle this if they want to.
This is only active on architectures/platforms with advanced machine
check handling and depends on the hardware capabilities.
Applications can override this setting individually with the PR_MCE_KILL prctl.
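For example, to have offending mappers killed as soon as a corruption is
detected:

	# echo 1 > /proc/sys/vm/memory_failure_early_kill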
==============================================================
memory_failure_recovery
Enable memory failure recovery (when supported by the platform)
1: Attempt recovery.
0: Always panic on a memory failure.
==============================================================
min_free_kbytes:
......
......@@ -80,7 +80,7 @@ Note: PTL can also be used to guarantee that no new clones using the
mm start up ... this is a loose form of stability on mm_users. For
example, it is used in copy_mm to protect against a racing tlb_gather_mmu
single address space optimization, so that the zap_page_range (from
vmtruncate) does not lose sending ipi's to cloned threads that might
truncate) does not lose sending ipi's to cloned threads that might
be spawned underneath it and go to user mode to drag in pte's into tlbs.
swap_lock
......
......@@ -5,6 +5,7 @@
* Copyright (C) 2009 Wu Fengguang <fengguang.wu@intel.com>
*/
#define _LARGEFILE64_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
......@@ -13,11 +14,32 @@
#include <string.h>
#include <getopt.h>
#include <limits.h>
#include <assert.h>
#include <sys/types.h>
#include <sys/errno.h>
#include <sys/fcntl.h>
/*
* pagemap kernel ABI bits
*/
#define PM_ENTRY_BYTES sizeof(uint64_t)
#define PM_STATUS_BITS 3
#define PM_STATUS_OFFSET (64 - PM_STATUS_BITS)
#define PM_STATUS_MASK (((1LL << PM_STATUS_BITS) - 1) << PM_STATUS_OFFSET)
#define PM_STATUS(nr) (((nr) << PM_STATUS_OFFSET) & PM_STATUS_MASK)
#define PM_PSHIFT_BITS 6
#define PM_PSHIFT_OFFSET (PM_STATUS_OFFSET - PM_PSHIFT_BITS)
#define PM_PSHIFT_MASK (((1LL << PM_PSHIFT_BITS) - 1) << PM_PSHIFT_OFFSET)
#define PM_PSHIFT(x) (((u64) (x) << PM_PSHIFT_OFFSET) & PM_PSHIFT_MASK)
#define PM_PFRAME_MASK ((1LL << PM_PSHIFT_OFFSET) - 1)
#define PM_PFRAME(x) ((x) & PM_PFRAME_MASK)
#define PM_PRESENT PM_STATUS(4LL)
#define PM_SWAP PM_STATUS(2LL)
/*
* kernel page flags
*/
......@@ -126,6 +148,14 @@ static int nr_addr_ranges;
static unsigned long opt_offset[MAX_ADDR_RANGES];
static unsigned long opt_size[MAX_ADDR_RANGES];
#define MAX_VMAS 10240
static int nr_vmas;
static unsigned long pg_start[MAX_VMAS];
static unsigned long pg_end[MAX_VMAS];
static unsigned long voffset;
static int pagemap_fd;
#define MAX_BIT_FILTERS 64
static int nr_bit_filters;
static uint64_t opt_mask[MAX_BIT_FILTERS];
......@@ -135,7 +165,6 @@ static int page_size;
#define PAGES_BATCH (64 << 10) /* 64k pages */
static int kpageflags_fd;
static uint64_t kpageflags_buf[KPF_BYTES * PAGES_BATCH];
#define HASH_SHIFT 13
#define HASH_SIZE (1 << HASH_SHIFT)
......@@ -158,6 +187,11 @@ static uint64_t page_flags[HASH_SIZE];
type __min2 = (y); \
__min1 < __min2 ? __min1 : __min2; })
#define max_t(type, x, y) ({ \
type __max1 = (x); \
type __max2 = (y); \
__max1 > __max2 ? __max1 : __max2; })
static unsigned long pages2mb(unsigned long pages)
{
return (pages * page_size) >> 20;
......@@ -224,26 +258,34 @@ static char *page_flag_longname(uint64_t flags)
static void show_page_range(unsigned long offset, uint64_t flags)
{
static uint64_t flags0;
static unsigned long voff;
static unsigned long index;
static unsigned long count;
if (flags == flags0 && offset == index + count) {
if (flags == flags0 && offset == index + count &&
(!opt_pid || voffset == voff + count)) {
count++;
return;
}
if (count)
printf("%lu\t%lu\t%s\n",
if (count) {
if (opt_pid)
printf("%lx\t", voff);
printf("%lx\t%lx\t%s\n",
index, count, page_flag_name(flags0));
}
flags0 = flags;
index = offset;
voff = voffset;
count = 1;
}
static void show_page(unsigned long offset, uint64_t flags)
{
printf("%lu\t%s\n", offset, page_flag_name(flags));
if (opt_pid)
printf("%lx\t", voffset);
printf("%lx\t%s\n", offset, page_flag_name(flags));
}
static void show_summary(void)
......@@ -383,6 +425,8 @@ static void walk_pfn(unsigned long index, unsigned long count)
lseek(kpageflags_fd, index * KPF_BYTES, SEEK_SET);
while (count) {
uint64_t kpageflags_buf[KPF_BYTES * PAGES_BATCH];
batch = min_t(unsigned long, count, PAGES_BATCH);
n = read(kpageflags_fd, kpageflags_buf, batch * KPF_BYTES);
if (n == 0)
......@@ -404,6 +448,81 @@ static void walk_pfn(unsigned long index, unsigned long count)
}
}
#define PAGEMAP_BATCH 4096
static unsigned long task_pfn(unsigned long pgoff)
{
static uint64_t buf[PAGEMAP_BATCH];
static unsigned long start;
static long count;
uint64_t pfn;
if (pgoff < start || pgoff >= start + count) {
if (lseek64(pagemap_fd,
(uint64_t)pgoff * PM_ENTRY_BYTES,
SEEK_SET) < 0) {
perror("pagemap seek");
exit(EXIT_FAILURE);
}
count = read(pagemap_fd, buf, sizeof(buf));
if (count == 0)
return 0;
if (count < 0) {
perror("pagemap read");
exit(EXIT_FAILURE);
}
if (count % PM_ENTRY_BYTES) {
fatal("pagemap read not aligned.\n");
exit(EXIT_FAILURE);
}
count /= PM_ENTRY_BYTES;
start = pgoff;
}
pfn = buf[pgoff - start];
if (pfn & PM_PRESENT)
pfn = PM_PFRAME(pfn);
else
pfn = 0;
return pfn;
}
static void walk_task(unsigned long index, unsigned long count)
{
int i = 0;
const unsigned long end = index + count;
while (index < end) {
while (pg_end[i] <= index)
if (++i >= nr_vmas)
return;
if (pg_start[i] >= end)
return;
voffset = max_t(unsigned long, pg_start[i], index);
index = min_t(unsigned long, pg_end[i], end);
assert(voffset < index);
for (; voffset < index; voffset++) {
unsigned long pfn = task_pfn(voffset);
if (pfn)
walk_pfn(pfn, 1);
}
}
}
static void add_addr_range(unsigned long offset, unsigned long size)
{
if (nr_addr_ranges >= MAX_ADDR_RANGES)
fatal("too many addr ranges\n");
opt_offset[nr_addr_ranges] = offset;
opt_size[nr_addr_ranges] = min_t(unsigned long, size, ULONG_MAX-offset);
nr_addr_ranges++;
}
static void walk_addr_ranges(void)
{
int i;
......@@ -415,10 +534,13 @@ static void walk_addr_ranges(void)
}
if (!nr_addr_ranges)
walk_pfn(0, ULONG_MAX);
add_addr_range(0, ULONG_MAX);
for (i = 0; i < nr_addr_ranges; i++)
walk_pfn(opt_offset[i], opt_size[i]);
if (!opt_pid)
walk_pfn(opt_offset[i], opt_size[i]);
else
walk_task(opt_offset[i], opt_size[i]);
close(kpageflags_fd);
}
......@@ -446,8 +568,8 @@ static void usage(void)
" -r|--raw Raw mode, for kernel developers\n"
" -a|--addr addr-spec Walk a range of pages\n"
" -b|--bits bits-spec Walk pages with specified bits\n"
#if 0 /* planned features */
" -p|--pid pid Walk process address space\n"
#if 0 /* planned features */
" -f|--file filename Walk file address space\n"
#endif
" -l|--list Show page details in ranges\n"
......@@ -459,7 +581,7 @@ static void usage(void)
" N+M pages range from N to N+M-1\n"
" N,M pages range from N to M-1\n"
" N, pages range from N to end\n"
" ,M pages range from 0 to M\n"
" ,M pages range from 0 to M-1\n"
"bits-spec:\n"
" bit1,bit2 (flags & (bit1|bit2)) != 0\n"
" bit1,bit2=bit1 (flags & (bit1|bit2)) == bit1\n"
......@@ -496,21 +618,57 @@ static unsigned long long parse_number(const char *str)
static void parse_pid(const char *str)
{
FILE *file;
char buf[5000];
opt_pid = parse_number(str);
}
static void parse_file(const char *name)
{
sprintf(buf, "/proc/%d/pagemap", opt_pid);
pagemap_fd = open(buf, O_RDONLY);
if (pagemap_fd < 0) {
perror(buf);
exit(EXIT_FAILURE);
}
sprintf(buf, "/proc/%d/maps", opt_pid);
file = fopen(buf, "r");
if (!file) {
perror(buf);
exit(EXIT_FAILURE);
}
while (fgets(buf, sizeof(buf), file) != NULL) {
unsigned long vm_start;
unsigned long vm_end;
unsigned long long pgoff;
int major, minor;
char r, w, x, s;
unsigned long ino;
int n;
n = sscanf(buf, "%lx-%lx %c%c%c%c %llx %x:%x %lu",
&vm_start,
&vm_end,
&r, &w, &x, &s,
&pgoff,
&major, &minor,
&ino);
if (n < 10) {
fprintf(stderr, "unexpected line: %s\n", buf);
continue;
}
pg_start[nr_vmas] = vm_start / page_size;
pg_end[nr_vmas] = vm_end / page_size;
if (++nr_vmas >= MAX_VMAS) {
fprintf(stderr, "too many VMAs\n");
break;
}
}
fclose(file);
}
static void add_addr_range(unsigned long offset, unsigned long size)
static void parse_file(const char *name)
{
if (nr_addr_ranges >= MAX_ADDR_RANGES)
fatal("too much addr ranges\n");
opt_offset[nr_addr_ranges] = offset;
opt_size[nr_addr_ranges] = size;
nr_addr_ranges++;
}
static void parse_addr_range(const char *optarg)
......@@ -676,8 +834,10 @@ int main(int argc, char *argv[])
}
}
if (opt_list && opt_pid)
printf("voffset\t");
if (opt_list == 1)
printf("offset\tcount\tflags\n");
printf("offset\tlen\tflags\n");
if (opt_list == 2)
printf("offset\tflags\n");
......
......@@ -257,12 +257,6 @@ W: http://www.lesswatts.org/projects/acpi/
S: Supported
F: drivers/acpi/fan.c
ACPI PCI HOTPLUG DRIVER
M: Kristen Carlson Accardi <kristen.c.accardi@intel.com>
L: linux-pci@vger.kernel.org
S: Supported
F: drivers/pci/hotplug/acpi*
ACPI THERMAL DRIVER
M: Zhang Rui <rui.zhang@intel.com>
L: linux-acpi@vger.kernel.org
......@@ -689,7 +683,7 @@ S: Maintained
ARM/INTEL IXP4XX ARM ARCHITECTURE
M: Imre Kaloz <kaloz@openwrt.org>
M: Krzysztof Halasa <khc@pm.waw.pl>
L: linux-arm-kernel@lists.infradead.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-ixp4xx/
......@@ -746,18 +740,22 @@ M: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
M: Dirk Opfer <dirk@opfer-online.de>
S: Maintained
ARM/PALMTX,PALMT5,PALMLD,PALMTE2 SUPPORT
M: Marek Vasut <marek.vasut@gmail.com>
ARM/PALMTX,PALMT5,PALMLD,PALMTE2,PALMTC SUPPORT
P: Marek Vasut
M: marek.vasut@gmail.com
L: linux-arm-kernel@lists.infradead.org
W: http://hackndev.com
S: Maintained
ARM/PALM TREO 680 SUPPORT
M: Tomas Cech <sleep_walker@suse.cz>
L: linux-arm-kernel@lists.infradead.org
W: http://hackndev.com
S: Maintained
ARM/PALMZ72 SUPPORT
M: Sergey Lapin <slapin@ossfans.org>
L: linux-arm-kernel@lists.infradead.org
W: http://hackndev.com
S: Maintained
......@@ -2331,7 +2329,9 @@ S: Orphan
F: drivers/hwmon/
HARDWARE RANDOM NUMBER GENERATOR CORE
S: Orphan
M: Matt Mackall <mpm@selenic.com>
M: Herbert Xu <herbert@gondor.apana.org.au>
S: Odd fixes
F: Documentation/hw_random.txt
F: drivers/char/hw_random/
F: include/linux/hw_random.h
......@@ -4003,11 +4003,11 @@ F: Documentation/PCI/
F: drivers/pci/
F: include/linux/pci*
PCIE HOTPLUG DRIVER
M: Kristen Carlson Accardi <kristen.c.accardi@intel.com>
PCI HOTPLUG
M: Jesse Barnes <jbarnes@virtuousgeek.org>
L: linux-pci@vger.kernel.org
S: Supported
F: drivers/pci/pcie/
F: drivers/pci/hotplug
PCMCIA SUBSYSTEM
P: Linux PCMCIA Team
......@@ -4670,12 +4670,6 @@ F: drivers/serial/serial_lh7a40x.c
F: drivers/usb/gadget/lh7a40*
F: drivers/usb/host/ohci-lh7a40*
SHPC HOTPLUG DRIVER
M: Kristen Carlson Accardi <kristen.c.accardi@intel.com>
L: linux-pci@vger.kernel.org
S: Supported
F: drivers/pci/hotplug/shpchp*
SIMPLE FIRMWARE INTERFACE (SFI)
P: Len Brown
M: lenb@kernel.org
......@@ -4687,7 +4681,6 @@ F: arch/x86/kernel/*sfi*
F: drivers/sfi/
F: include/linux/sfi*.h
SIMTEC EB110ATX (Chalice CATS)
P: Ben Dooks
M: Vincent Sanders <support@simtec.co.uk>
......
......@@ -26,6 +26,8 @@
#define F_GETOWN 6 /* for sockets. */
#define F_SETSIG 10 /* for sockets. */
#define F_GETSIG 11 /* for sockets. */
#define F_SETOWN_EX 12
#define F_GETOWN_EX 13
/* for posix fcntl() and lockf() */
#define F_RDLCK 1
......
......@@ -1016,7 +1016,7 @@ marvel_agp_bind_memory(alpha_agp_info *agp, off_t pg_start, struct agp_memory *m
{
struct marvel_agp_aperture *aper = agp->aperture.sysdata;
return iommu_bind(aper->arena, aper->pg_start + pg_start,
mem->page_count, mem->memory);
mem->page_count, mem->pages);
}
static int
......
......@@ -680,7 +680,7 @@ titan_agp_bind_memory(alpha_agp_info *agp, off_t pg_start, struct agp_memory *me
{
struct titan_agp_aperture *aper = agp->aperture.sysdata;
return iommu_bind(aper->arena, aper->pg_start + pg_start,
mem->page_count, mem->memory);
mem->page_count, mem->pages);
}
static int
......
......@@ -13,6 +13,5 @@ static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
struct task_struct init_task = INIT_TASK(init_task);
EXPORT_SYMBOL(init_task);
union thread_union init_thread_union
__attribute__((section(".data.init_thread")))
= { INIT_THREAD_INFO(init_task) };
union thread_union init_thread_union __init_task_data =
{ INIT_THREAD_INFO(init_task) };
......@@ -198,7 +198,7 @@ extern unsigned long size_for_memory(unsigned long max);
extern int iommu_reserve(struct pci_iommu_arena *, long, long);
extern int iommu_release(struct pci_iommu_arena *, long, long);
extern int iommu_bind(struct pci_iommu_arena *, long, long, unsigned long *);
extern int iommu_bind(struct pci_iommu_arena *, long, long, struct page **);
extern int iommu_unbind(struct pci_iommu_arena *, long, long);
......@@ -876,7 +876,7 @@ iommu_release(struct pci_iommu_arena *arena, long pg_start, long pg_count)
int
iommu_bind(struct pci_iommu_arena *arena, long pg_start, long pg_count,
unsigned long *physaddrs)
struct page **pages)
{
unsigned long flags;
unsigned long *ptes;
......@@ -896,7 +896,7 @@ iommu_bind(struct pci_iommu_arena *arena, long pg_start, long pg_count,
}
for(i = 0, j = pg_start; i < pg_count; i++, j++)
ptes[j] = mk_iommu_pte(physaddrs[i]);
ptes[j] = mk_iommu_pte(page_to_phys(pages[i]));
spin_unlock_irqrestore(&arena->lock, flags);
......
#include <asm-generic/vmlinux.lds.h>
#include <asm/page.h>
#include <asm/thread_info.h>
OUTPUT_FORMAT("elf64-alpha")
OUTPUT_ARCH(alpha)
......@@ -31,88 +32,21 @@ SECTIONS
} :kernel
RODATA
/* Exception table */
. = ALIGN(16);
__ex_table : {
__start___ex_table = .;
*(__ex_table)
__stop___ex_table = .;
}
EXCEPTION_TABLE(16)
/* Will be freed after init */
. = ALIGN(PAGE_SIZE);
/* Init code and data */
__init_begin = .;
.init.text : {
_sinittext = .;
INIT_TEXT
_einittext = .;
}
.init.data : {
INIT_DATA
}
. = ALIGN(16);
.init.setup : {
__setup_start = .;
*(.init.setup)
__setup_end = .;
}
. = ALIGN(8);
.initcall.init : {
__initcall_start = .;
INITCALLS
__initcall_end = .;
}
#ifdef CONFIG_BLK_DEV_INITRD
. = ALIGN(PAGE_SIZE);
.init.ramfs : {
__initramfs_start = .;
*(.init.ramfs)
__initramfs_end = .;
}
#endif
. = ALIGN(8);
.con_initcall.init : {
__con_initcall_start = .;
*(.con_initcall.init)
__con_initcall_end = .;
}
. = ALIGN(8);
SECURITY_INIT
__init_begin = ALIGN(PAGE_SIZE);
INIT_TEXT_SECTION(PAGE_SIZE)
INIT_DATA_SECTION(16)
PERCPU(PAGE_SIZE)
. = ALIGN(2 * PAGE_SIZE);
/* Align to THREAD_SIZE rather than PAGE_SIZE here so any padding page
needed for the THREAD_SIZE aligned init_task gets freed after init */
. = ALIGN(THREAD_SIZE);
__init_end = .;
/* Freed after init ends here */
/* Note 2 page alignment above. */
.data.init_thread : {
*(.data.init_thread)
}
. = ALIGN(PAGE_SIZE);
.data.page_aligned : {
*(.data.page_aligned)
}
. = ALIGN(64);
.data.cacheline_aligned : {
*(.data.cacheline_aligned)
}
_data = .;
/* Data */
.data : {
DATA_DATA
CONSTRUCTORS
}
RW_DATA_SECTION(64, PAGE_SIZE, THREAD_SIZE)
.got : {
*(.got)
......@@ -122,16 +56,7 @@ SECTIONS
}
_edata = .; /* End of data section */
__bss_start = .;
.sbss : {
*(.sbss)
*(.scommon)
}
.bss : {
*(.bss)
*(COMMON)
}
__bss_stop = .;
BSS_SECTION(0, 0, 0)
_end = .;
.mdebug 0 : {
......
......@@ -46,6 +46,10 @@ config GENERIC_CLOCKEVENTS_BROADCAST
depends on GENERIC_CLOCKEVENTS
default y if SMP && !LOCAL_TIMERS
config HAVE_TCM
bool
select GENERIC_ALLOCATOR
config NO_IOPORT
bool
......@@ -649,6 +653,7 @@ config ARCH_U300
bool "ST-Ericsson U300 Series"
depends on MMU
select CPU_ARM926T
select HAVE_TCM
select ARM_AMBA
select ARM_VIC
select GENERIC_TIME
......
......@@ -865,6 +865,7 @@ void locomo_gpio_set_dir(struct device *dev, unsigned int bits, unsigned int dir
spin_unlock_irqrestore(&lchip->lock, flags);
}
EXPORT_SYMBOL(locomo_gpio_set_dir);
int locomo_gpio_read_level(struct device *dev, unsigned int bits)
{
......@@ -882,6 +883,7 @@ int locomo_gpio_read_level(struct device *dev, unsigned int bits)
ret &= bits;
return ret;
}
EXPORT_SYMBOL(locomo_gpio_read_level);
int locomo_gpio_read_output(struct device *dev, unsigned int bits)
{
......@@ -899,6 +901,7 @@ int locomo_gpio_read_output(struct device *dev, unsigned int bits)
ret &= bits;
return ret;
}
EXPORT_SYMBOL(locomo_gpio_read_output);
void locomo_gpio_write(struct device *dev, unsigned int bits, unsigned int set)
{
......@@ -920,6 +923,7 @@ void locomo_gpio_write(struct device *dev, unsigned int bits, unsigned int set)
spin_unlock_irqrestore(&lchip->lock, flags);
}
EXPORT_SYMBOL(locomo_gpio_write);
static void locomo_m62332_sendbit(void *mapbase, int bit)
{
......@@ -1084,13 +1088,12 @@ void locomo_m62332_senddata(struct locomo_dev *ldev, unsigned int dac_data, int
spin_unlock_irqrestore(&lchip->lock, flags);
}
EXPORT_SYMBOL(locomo_m62332_senddata);
/*
* Frontlight control
*/
static struct locomo *locomo_chip_driver(struct locomo_dev *ldev);
void locomo_frontlight_set(struct locomo_dev *dev, int duty, int vr, int bpwf)
{
unsigned long flags;
......@@ -1182,11 +1185,13 @@ int locomo_driver_register(struct locomo_driver *driver)
driver->drv.bus = &locomo_bus_type;
return driver_register(&driver->drv);
}
EXPORT_SYMBOL(locomo_driver_register);
void locomo_driver_unregister(struct locomo_driver *driver)
{
driver_unregister(&driver->drv);
}
EXPORT_SYMBOL(locomo_driver_unregister);
static int __init locomo_init(void)
{
......@@ -1208,11 +1213,3 @@ module_exit(locomo_exit);
MODULE_DESCRIPTION("Sharp LoCoMo core driver");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("John Lenz <lenz@cs.wisc.edu>");
EXPORT_SYMBOL(locomo_driver_register);
EXPORT_SYMBOL(locomo_driver_unregister);
EXPORT_SYMBOL(locomo_gpio_set_dir);
EXPORT_SYMBOL(locomo_gpio_read_level);
EXPORT_SYMBOL(locomo_gpio_read_output);
EXPORT_SYMBOL(locomo_gpio_write);
EXPORT_SYMBOL(locomo_m62332_senddata);
......@@ -22,6 +22,7 @@
#include <linux/list.h>
#include <linux/io.h>
#include <linux/sysdev.h>
#include <linux/device.h>
#include <linux/amba/bus.h>
#include <asm/mach/irq.h>
......
......@@ -19,31 +19,21 @@
#ifdef __KERNEL__
/*
* On ARM, ordinary assignment (str instruction) doesn't clear the local
* strex/ldrex monitor on some implementations. The reason we can use it for
* atomic_set() is the clrex or dummy strex done on every exception return.
*/
#define atomic_read(v) ((v)->counter)
#define atomic_set(v,i) (((v)->counter) = (i))
#if __LINUX_ARM_ARCH__ >= 6
/*
* ARMv6 UP and SMP safe atomic ops. We use load exclusive and
* store exclusive to ensure that these are atomic. We may loop
* to ensure that the update happens. Writing to 'v->counter'
* without using the following operations WILL break the atomic
* nature of these ops.
* to ensure that the update happens.
*/
static inline void atomic_set(atomic_t *v, int i)
{
unsigned long tmp;
__asm__ __volatile__("@ atomic_set\n"
"1: ldrex %0, [%1]\n"
" strex %0, %2, [%1]\n"
" teq %0, #0\n"
" bne 1b"
: "=&r" (tmp)
: "r" (&v->counter), "r" (i)
: "cc");
}
static inline void atomic_add(int i, atomic_t *v)
{
unsigned long tmp;
......@@ -163,8 +153,6 @@ static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr)
#error SMP not supported on pre-ARMv6 CPUs
#endif
#define atomic_set(v,i) (((v)->counter) = (i))
static inline int atomic_add_return(int i, atomic_t *v)
{
unsigned long flags;
......
......@@ -4,7 +4,7 @@
#ifndef __ASMARM_CACHE_H
#define __ASMARM_CACHE_H
#define L1_CACHE_SHIFT 5
#define L1_CACHE_SHIFT CONFIG_ARM_L1_CACHE_SHIFT
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
/*
......
......@@ -63,6 +63,11 @@ static inline unsigned int __attribute_const__ read_cpuid_cachetype(void)
return read_cpuid(CPUID_CACHETYPE);
}
static inline unsigned int __attribute_const__ read_cpuid_tcmstatus(void)
{
return read_cpuid(CPUID_TCM);
}
/*
* Intel's XScale3 core supports some v6 features (supersections, L2)
* but advertises itself as v5 as it does not support the v6 ISA. For
......@@ -73,7 +78,10 @@ static inline unsigned int __attribute_const__ read_cpuid_cachetype(void)
#else
static inline int cpu_is_xsc3(void)
{
if ((read_cpuid_id() & 0xffffe000) == 0x69056000)
unsigned int id;
id = read_cpuid_id() & 0xffffe000;
/* It covers both Intel ID and Marvell ID */
if ((id == 0x69056000) || (id == 0x56056000))
return 1;
return 0;
......
......@@ -187,11 +187,74 @@ union iop3xx_desc {
void *ptr;
};
/* No support for p+q operations */
static inline int
iop_chan_pq_slot_count(size_t len, int src_cnt, int *slots_per_op)
{
BUG();
return 0;
}
static inline void
iop_desc_init_pq(struct iop_adma_desc_slot *desc, int src_cnt,
unsigned long flags)
{
BUG();
}
static inline void
iop_desc_set_pq_addr(struct iop_adma_desc_slot *desc, dma_addr_t *addr)
{
BUG();
}
static inline void
iop_desc_set_pq_src_addr(struct iop_adma_desc_slot *desc, int src_idx,
dma_addr_t addr, unsigned char coef)
{
BUG();
}
static inline int
iop_chan_pq_zero_sum_slot_count(size_t len, int src_cnt, int *slots_per_op)
{
BUG();
return 0;
}
static inline void
iop_desc_init_pq_zero_sum(struct iop_adma_desc_slot *desc, int src_cnt,
unsigned long flags)
{
BUG();
}
static inline void
iop_desc_set_pq_zero_sum_byte_count(struct iop_adma_desc_slot *desc, u32 len)
{
BUG();
}
#define iop_desc_set_pq_zero_sum_src_addr iop_desc_set_pq_src_addr
static inline void
iop_desc_set_pq_zero_sum_addr(struct iop_adma_desc_slot *desc, int pq_idx,
dma_addr_t *src)
{
BUG();
}
static inline int iop_adma_get_max_xor(void)
{
return 32;
}
static inline int iop_adma_get_max_pq(void)
{
BUG();
return 0;
}
static inline u32 iop_chan_get_current_descriptor(struct iop_adma_chan *chan)
{
int id = chan->device->id;
......@@ -332,6 +395,11 @@ static inline int iop_chan_zero_sum_slot_count(size_t len, int src_cnt,
return slot_cnt;
}
static inline int iop_desc_is_pq(struct iop_adma_desc_slot *desc)
{
return 0;
}
static inline u32 iop_desc_get_dest_addr(struct iop_adma_desc_slot *desc,
struct iop_adma_chan *chan)
{
......@@ -349,6 +417,14 @@ static inline u32 iop_desc_get_dest_addr(struct iop_adma_desc_slot *desc,
return 0;
}
static inline u32 iop_desc_get_qdest_addr(struct iop_adma_desc_slot *desc,
struct iop_adma_chan *chan)
{
BUG();
return 0;
}
static inline u32 iop_desc_get_byte_count(struct iop_adma_desc_slot *desc,
struct iop_adma_chan *chan)
{
......@@ -756,13 +832,14 @@ static inline void iop_desc_set_block_fill_val(struct iop_adma_desc_slot *desc,
hw_desc->src[0] = val;
}
static inline int iop_desc_get_zero_result(struct iop_adma_desc_slot *desc)
static inline enum sum_check_flags
iop_desc_get_zero_result(struct iop_adma_desc_slot *desc)
{
struct iop3xx_desc_aau *hw_desc = desc->hw_desc;
struct iop3xx_aau_desc_ctrl desc_ctrl = hw_desc->desc_ctrl_field;
iop_paranoia(!(desc_ctrl.tx_complete && desc_ctrl.zero_result_en));
return desc_ctrl.zero_result_err;
return desc_ctrl.zero_result_err << SUM_CHECK_P;
}
static inline void iop_chan_append(struct iop_adma_chan *chan)
......
......@@ -86,6 +86,7 @@ struct iop_adma_chan {
* @idx: pool index
* @unmap_src_cnt: number of xor sources
* @unmap_len: transaction bytecount
* @tx_list: list of descriptors that are associated with one operation
* @async_tx: support for the async_tx api
* @group_list: list of slots that make up a multi-descriptor transaction
* for example transfer lengths larger than the supported hw max
......@@ -102,10 +103,12 @@ struct iop_adma_desc_slot {
u16 idx;
u16 unmap_src_cnt;
size_t unmap_len;
struct list_head tx_list;
struct dma_async_tx_descriptor async_tx;
union {
u32 *xor_check_result;
u32 *crc32_result;
u32 *pq_check_result;
};
};
......
/*
*
* Copyright (C) 2008-2009 ST-Ericsson AB
* License terms: GNU General Public License (GPL) version 2
*
* Author: Rickard Andersson <rickard.andersson@stericsson.com>
* Author: Linus Walleij <linus.walleij@stericsson.com>
*
*/
#ifndef __ASMARM_TCM_H
#define __ASMARM_TCM_H
#ifndef CONFIG_HAVE_TCM
#error "You should not be including tcm.h unless you have a TCM!"
#endif
#include <linux/compiler.h>
/* Tag variables with this */
#define __tcmdata __section(.tcm.data)
/* Tag constants with this */
#define __tcmconst __section(.tcm.rodata)
/* Tag functions inside TCM called from outside TCM with this */
#define __tcmfunc __attribute__((long_call)) __section(.tcm.text) noinline
/* Tag function inside TCM called from inside TCM with this */
#define __tcmlocalfunc __section(.tcm.text)
void *tcm_alloc(size_t len);
void tcm_free(void *addr, size_t len);
#endif
......@@ -35,7 +35,9 @@
#define ARM(x...)
#define THUMB(x...) x
#ifdef __ASSEMBLY__
#define W(instr) instr.w
#endif
#define BSYM(sym) sym + 1
#else /* !CONFIG_THUMB2_KERNEL */
......@@ -45,7 +47,9 @@
#define ARM(x...) x
#define THUMB(x...)
#ifdef __ASSEMBLY__
#define W(instr) instr
#endif
#define BSYM(sym) sym
#endif /* CONFIG_THUMB2_KERNEL */
......
......@@ -35,6 +35,7 @@ obj-$(CONFIG_OABI_COMPAT) += sys_oabi-compat.o
obj-$(CONFIG_ARM_THUMBEE) += thumbee.o
obj-$(CONFIG_KGDB) += kgdb.o
obj-$(CONFIG_ARM_UNWIND) += unwind.o
obj-$(CONFIG_HAVE_TCM) += tcm.o
obj-$(CONFIG_CRUNCH) += crunch.o crunch-bits.o
AFLAGS_crunch-bits.o := -Wa,-mcpu=ep9312
......
......@@ -272,7 +272,15 @@ __und_svc:
@
@ r0 - instruction
@
#ifndef CONFIG_THUMB2_KERNEL
ldr r0, [r2, #-4]
#else
ldrh r0, [r2, #-2] @ Thumb instruction at LR - 2
and r9, r0, #0xf800
cmp r9, #0xe800 @ 32-bit instruction if xx >= 0
ldrhhs r9, [r2] @ bottom 16 bits
orrhs r0, r9, r0, lsl #16
#endif
adr r9, BSYM(1f)
bl call_fpe
......@@ -678,7 +686,9 @@ ENTRY(fp_enter)
.word no_fp
.previous
no_fp: mov pc, lr
ENTRY(no_fp)
mov pc, lr
ENDPROC(no_fp)
__und_usr_unknown:
enable_irq
......@@ -734,13 +744,6 @@ ENTRY(__switch_to)
#ifdef CONFIG_MMU
ldr r6, [r2, #TI_CPU_DOMAIN]
#endif
#if __LINUX_ARM_ARCH__ >= 6
#ifdef CONFIG_CPU_32v6K
clrex
#else
strex r5, r4, [ip] @ Clear exclusive monitor
#endif
#endif
#if defined(CONFIG_HAS_TLS_REG)
mcr p15, 0, r3, c13, c0, 3 @ set TLS register
#elif !defined(CONFIG_TLS_REG_EMUL)
......
......@@ -76,13 +76,25 @@
#ifndef CONFIG_THUMB2_KERNEL
.macro svc_exit, rpsr
msr spsr_cxsf, \rpsr
#if defined(CONFIG_CPU_32v6K)
clrex @ clear the exclusive monitor
ldmia sp, {r0 - pc}^ @ load r0 - pc, cpsr
#elif defined (CONFIG_CPU_V6)
ldr r0, [sp]
strex r1, r2, [sp] @ clear the exclusive monitor
ldmib sp, {r1 - pc}^ @ load r1 - pc, cpsr
#endif
.endm
.macro restore_user_regs, fast = 0, offset = 0
ldr r1, [sp, #\offset + S_PSR] @ get calling cpsr
ldr lr, [sp, #\offset + S_PC]! @ get pc
msr spsr_cxsf, r1 @ save in spsr_svc
#if defined(CONFIG_CPU_32v6K)
clrex @ clear the exclusive monitor
#elif defined (CONFIG_CPU_V6)
strex r1, r2, [sp] @ clear the exclusive monitor
#endif
.if \fast
ldmdb sp, {r1 - lr}^ @ get calling r1 - lr
.else
......@@ -98,6 +110,7 @@
.endm
#else /* CONFIG_THUMB2_KERNEL */
.macro svc_exit, rpsr
clrex @ clear the exclusive monitor
ldr r0, [sp, #S_SP] @ top of the stack
ldr r1, [sp, #S_PC] @ return address
tst r0, #4 @ orig stack 8-byte aligned?
......@@ -110,6 +123,7 @@
.endm
.macro restore_user_regs, fast = 0, offset = 0
clrex @ clear the exclusive monitor
mov r2, sp
load_user_sp_lr r2, r3, \offset + S_SP @ calling sp, lr
ldr r1, [sp, #\offset + S_PSR] @ get calling cpsr
......
......@@ -22,6 +22,7 @@
#include <linux/kernel.h>
#include <linux/kprobes.h>
#include <linux/module.h>
#include <linux/stop_machine.h>
#include <linux/stringify.h>
#include <asm/traps.h>
#include <asm/cacheflush.h>
......@@ -83,10 +84,24 @@ void __kprobes arch_arm_kprobe(struct kprobe *p)
flush_insns(p->addr, 1);
}
/*
* The actual disarming is done here on each CPU and synchronized using
* stop_machine. This synchronization is necessary on SMP to avoid removing
* a probe between the moment the 'Undefined Instruction' exception is raised
* and the moment the exception handler reads the faulting instruction from
* memory.
*/
int __kprobes __arch_disarm_kprobe(void *p)
{
struct kprobe *kp = p;
*kp->addr = kp->opcode;
flush_insns(kp->addr, 1);
return 0;
}
void __kprobes arch_disarm_kprobe(struct kprobe *p)
{
*p->addr = p->opcode;
flush_insns(p->addr, 1);
stop_machine(__arch_disarm_kprobe, p, &cpu_online_map);
}
void __kprobes arch_remove_kprobe(struct kprobe *p)
......
......@@ -45,6 +45,7 @@
#include "compat.h"
#include "atags.h"
#include "tcm.h"
#ifndef MEM_SIZE
#define MEM_SIZE (16*1024*1024)
......@@ -749,6 +750,7 @@ void __init setup_arch(char **cmdline_p)
#endif
cpu_init();
tcm_init();
/*
* Set up various architecture-specific pointers
......
/*
* Copyright (C) 2008-2009 ST-Ericsson AB
* License terms: GNU General Public License (GPL) version 2
* TCM memory handling for ARM systems
*
* Author: Linus Walleij <linus.walleij@stericsson.com>
* Author: Rickard Andersson <rickard.andersson@stericsson.com>
*/
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/stddef.h>
#include <linux/ioport.h>
#include <linux/genalloc.h>
#include <linux/string.h> /* memcpy */
#include <asm/page.h> /* PAGE_SHIFT */
#include <asm/cputype.h>
#include <asm/mach/map.h>
#include <mach/memory.h>
#include "tcm.h"
/* Scream and warn about misuse */
#if !defined(ITCM_OFFSET) || !defined(ITCM_END) || \
!defined(DTCM_OFFSET) || !defined(DTCM_END)
#error "TCM support selected but offsets not defined!"
#endif
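For orientation, a minimal sketch (addresses invented, not taken from any real machine) of the four defines a machine's mach/memory.h is expected to supply when HAVE_TCM is selected:

/* Hypothetical example values only - pick addresses that do not
   overlap physical RAM on your SoC */
#define ITCM_OFFSET	0xfffe0000	/* 8 KiB ITCM */
#define ITCM_END	0xfffe1fff
#define DTCM_OFFSET	0xfffe8000	/* 8 KiB DTCM */
#define DTCM_END	0xfffe9fff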
static struct gen_pool *tcm_pool;
/* TCM section definitions from the linker */
extern char __itcm_start, __sitcm_text, __eitcm_text;
extern char __dtcm_start, __sdtcm_data, __edtcm_data;
/*
* TCM memory resources
*/
static struct resource dtcm_res = {
.name = "DTCM RAM",
.start = DTCM_OFFSET,
.end = DTCM_END,
.flags = IORESOURCE_MEM
};
static struct resource itcm_res = {
.name = "ITCM RAM",
.start = ITCM_OFFSET,
.end = ITCM_END,
.flags = IORESOURCE_MEM
};
static struct map_desc dtcm_iomap[] __initdata = {
{
.virtual = DTCM_OFFSET,
.pfn = __phys_to_pfn(DTCM_OFFSET),
.length = (DTCM_END - DTCM_OFFSET + 1),
.type = MT_UNCACHED
}
};
static struct map_desc itcm_iomap[] __initdata = {
{
.virtual = ITCM_OFFSET,
.pfn = __phys_to_pfn(ITCM_OFFSET),
.length = (ITCM_END - ITCM_OFFSET + 1),
.type = MT_UNCACHED
}
};
/*
* Allocate a chunk of TCM memory
*/
void *tcm_alloc(size_t len)
{
unsigned long vaddr;
if (!tcm_pool)
return NULL;
vaddr = gen_pool_alloc(tcm_pool, len);
if (!vaddr)
return NULL;
return (void *) vaddr;
}
EXPORT_SYMBOL(tcm_alloc);
/*
* Free a chunk of TCM memory
*/
void tcm_free(void *addr, size_t len)
{
gen_pool_free(tcm_pool, (unsigned long) addr, len);
}
EXPORT_SYMBOL(tcm_free);
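A usage sketch of this exported pair (hypothetical driver code, not part of this commit), for example to stash state in TCM before shutting off external RAM:

#include <linux/types.h>
#include <linux/errno.h>
#include <asm/tcm.h>

static u32 *tcm_save;	/* hypothetical buffer allocated from the TCM pool */

static int stash_state(void)
{
	tcm_save = tcm_alloc(64);	/* grab 64 bytes of TCM */
	if (!tcm_save)
		return -ENOMEM;
	/* ... copy whatever must survive the RAM shutdown here ... */
	return 0;
}

static void unstash_state(void)
{
	tcm_free(tcm_save, 64);		/* length must match the allocation */
}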
static void __init setup_tcm_bank(u8 type, u32 offset, u32 expected_size)
{
const int tcm_sizes[16] = { 0, -1, -1, 4, 8, 16, 32, 64, 128,
256, 512, 1024, -1, -1, -1, -1 };
u32 tcm_region;
int tcm_size;
/* Read the special TCM region register c9, 0 */
if (!type)
asm("mrc p15, 0, %0, c9, c1, 0"
: "=r" (tcm_region));
else
asm("mrc p15, 0, %0, c9, c1, 1"
: "=r" (tcm_region));
tcm_size = tcm_sizes[(tcm_region >> 2) & 0x0f];
if (tcm_size < 0) {
pr_err("CPU: %sTCM of unknown size!\n",
type ? "I" : "D");
} else {
pr_info("CPU: found %sTCM %dk @ %08x, %senabled\n",
type ? "I" : "D",
tcm_size,
(tcm_region & 0xfffff000U),
(tcm_region & 1) ? "" : "not ");
}
if (tcm_size != expected_size) {
pr_crit("CPU: %sTCM was detected %dk but expected %dk!\n",
type ? "I" : "D",
tcm_size,
expected_size);
/* Adjust to the expected size? what can we do... */
}
/* Force move the TCM bank to where we want it, enable */
tcm_region = offset | (tcm_region & 0x00000ffeU) | 1;
if (!type)
asm("mcr p15, 0, %0, c9, c1, 0"
: /* No output operands */
: "r" (tcm_region));
else
asm("mcr p15, 0, %0, c9, c1, 1"
: /* No output operands */
: "r" (tcm_region));
pr_debug("CPU: moved %sTCM %dk to %08x, enabled\n",
type ? "I" : "D",
tcm_size,
(tcm_region & 0xfffff000U));
}
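To make the bit layout read and written above concrete, a small decode sketch; the register value is invented purely for illustration (it would describe an enabled 8 KiB bank at 0xfffe8000):

#include <linux/types.h>
#include <linux/kernel.h>

static void tcm_region_decode_example(void)
{
	u32 tcm_region = 0xfffe8011;			/* hypothetical sample value */
	u32 base = tcm_region & 0xfffff000U;		/* bank base: 0xfffe8000 */
	int enabled = tcm_region & 1;			/* bit 0 set: bank enabled */
	int size_field = (tcm_region >> 2) & 0x0f;	/* 4 -> tcm_sizes[4] = 8 KiB */

	pr_info("TCM bank @ %08x, size field %d, %s\n",
		base, size_field, enabled ? "enabled" : "disabled");
}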
/*
* This initializes the TCM memory
*/
void __init tcm_init(void)
{
u32 tcm_status = read_cpuid_tcmstatus();
char *start;
char *end;
char *ram;
/* Setup DTCM if present */
if (tcm_status & (1 << 16)) {
setup_tcm_bank(0, DTCM_OFFSET,
(DTCM_END - DTCM_OFFSET + 1) >> 10);
request_resource(&iomem_resource, &dtcm_res);
iotable_init(dtcm_iomap, 1);
/* Copy data from RAM to DTCM */
start = &__sdtcm_data;
end = &__edtcm_data;
ram = &__dtcm_start;
memcpy(start, ram, (end-start));
pr_debug("CPU DTCM: copied data from %p - %p\n", start, end);
}
/* Setup ITCM if present */
if (tcm_status & 1) {
setup_tcm_bank(1, ITCM_OFFSET,
(ITCM_END - ITCM_OFFSET + 1) >> 10);
request_resource(&iomem_resource, &itcm_res);
iotable_init(itcm_iomap, 1);
/* Copy code from RAM to ITCM */
start = &__sitcm_text;
end = &__eitcm_text;
ram = &__itcm_start;
memcpy(start, ram, (end-start));
pr_debug("CPU ITCM: copied code from %p - %p\n", start, end);
}
}
/*
* This creates the TCM memory pool and has to be done later,
* during core_initcalls, since the allocator is not yet
* up and running when the first initialization runs.
*/
static int __init setup_tcm_pool(void)
{
u32 tcm_status = read_cpuid_tcmstatus();
u32 dtcm_pool_start = (u32) &__edtcm_data;
u32 itcm_pool_start = (u32) &__eitcm_text;
int ret;
/*
* Set up malloc pool, 2^2 = 4 bytes granularity since
* the TCM is sometimes just 4 KiB. NB: page and cache
* line alignment does not matter in TCM!
*/
tcm_pool = gen_pool_create(2, -1);
pr_debug("Setting up TCM memory pool\n");
/* Add the rest of DTCM to the TCM pool */
if (tcm_status & (1 << 16)) {
if (dtcm_pool_start < DTCM_END) {
ret = gen_pool_add(tcm_pool, dtcm_pool_start,
DTCM_END - dtcm_pool_start + 1, -1);
if (ret) {
pr_err("CPU DTCM: could not add DTCM " \
"remainder to pool!\n");
return ret;
}
pr_debug("CPU DTCM: Added %08x bytes @ %08x to " \
"the TCM memory pool\n",
DTCM_END - dtcm_pool_start + 1,
dtcm_pool_start);
}
}
/* Add the rest of ITCM to the TCM pool */
if (tcm_status & 1) {
if (itcm_pool_start < ITCM_END) {
ret = gen_pool_add(tcm_pool, itcm_pool_start,
ITCM_END - itcm_pool_start + 1, -1);
if (ret) {
pr_err("CPU ITCM: could not add ITCM " \
"remainder to pool!\n");
return ret;
}
pr_debug("CPU ITCM: Added %08x bytes @ %08x to " \
"the TCM memory pool\n",
ITCM_END - itcm_pool_start + 1,
itcm_pool_start);
}
}
return 0;
}
core_initcall(setup_tcm_pool);
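As a worked check of the remainder arithmetic above (numbers invented for illustration): with an 8 KiB DTCM and 1 KiB of __tcmdata already linked into it, __edtcm_data lands at DTCM_OFFSET + 0x400, so gen_pool_add() receives DTCM_END - dtcm_pool_start + 1 = 0x2000 - 0x400 = 0x1c00 bytes (7 KiB), which tcm_alloc() can then hand out in 4-byte granules.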
/*
* Copyright (C) 2008-2009 ST-Ericsson AB
* License terms: GNU General Public License (GPL) version 2
* TCM memory handling for ARM systems
*
* Author: Linus Walleij <linus.walleij@stericsson.com>
* Author: Rickard Andersson <rickard.andersson@stericsson.com>
*/
#ifdef CONFIG_HAVE_TCM
void __init tcm_init(void);
#else
/* No TCM support, just blank inlines to be optimized out */
inline void tcm_init(void)
{
}
#endif
......@@ -199,6 +199,63 @@ SECTIONS
}
_edata_loc = __data_loc + SIZEOF(.data);
#ifdef CONFIG_HAVE_TCM
/*
* We align everything to a page boundary so we can
* free it after init has commenced and TCM contents have
* been copied to their destination.
*/
.tcm_start : {
. = ALIGN(PAGE_SIZE);
__tcm_start = .;
__itcm_start = .;
}
/*
* Link these to the ITCM RAM.
* Put the VMA at the TCM address and the LMA in common RAM;
* the contents are then copied from RAM to TCM at boot and the
* RAM copy is freed afterwards.
*/
.text_itcm ITCM_OFFSET : AT(__itcm_start)
{
__sitcm_text = .;
*(.tcm.text)
*(.tcm.rodata)
. = ALIGN(4);
__eitcm_text = .;
}
/*
* Reset the dot pointer; this is needed to create the
* relative __dtcm_start below (to be used as extern in code).
*/
. = ADDR(.tcm_start) + SIZEOF(.tcm_start) + SIZEOF(.text_itcm);
.dtcm_start : {
__dtcm_start = .;
}
/* TODO: add remainder of ITCM as well, that can be used for data! */
.data_dtcm DTCM_OFFSET : AT(__dtcm_start)
{
. = ALIGN(4);
__sdtcm_data = .;
*(.tcm.data)
. = ALIGN(4);
__edtcm_data = .;
}
/* Reset the dot pointer or the linker gets confused */
. = ADDR(.dtcm_start) + SIZEOF(.data_dtcm);
/* End marker for freeing TCM copy in linked object */
.tcm_end : AT(ADDR(.dtcm_start) + SIZEOF(.data_dtcm)){
. = ALIGN(PAGE_SIZE);
__tcm_end = .;
}
#endif
.bss : {
__bss_start = .; /* BSS */
*(.bss)
......
......@@ -12,8 +12,9 @@
#include <linux/linkage.h>
#include <asm/assembler.h>
#include <asm/asm-offsets.h>
#include <asm/cache.h>
#define COPY_COUNT (PAGE_SZ/64 PLD( -1 ))
#define COPY_COUNT (PAGE_SZ / (2 * L1_CACHE_BYTES) PLD( -1 ))
.text
.align 5
......@@ -26,17 +27,16 @@
ENTRY(copy_page)
stmfd sp!, {r4, lr} @ 2
PLD( pld [r1, #0] )
PLD( pld [r1, #32] )
PLD( pld [r1, #L1_CACHE_BYTES] )
mov r2, #COPY_COUNT @ 1
ldmia r1!, {r3, r4, ip, lr} @ 4+1
1: PLD( pld [r1, #64] )
PLD( pld [r1, #96] )
2: stmia r0!, {r3, r4, ip, lr} @ 4
ldmia r1!, {r3, r4, ip, lr} @ 4+1
stmia r0!, {r3, r4, ip, lr} @ 4
ldmia r1!, {r3, r4, ip, lr} @ 4+1
1: PLD( pld [r1, #2 * L1_CACHE_BYTES])
PLD( pld [r1, #3 * L1_CACHE_BYTES])
2:
.rept (2 * L1_CACHE_BYTES / 16 - 1)
stmia r0!, {r3, r4, ip, lr} @ 4
ldmia r1!, {r3, r4, ip, lr} @ 4
.endr
subs r2, r2, #1 @ 1
stmia r0!, {r3, r4, ip, lr} @ 4
ldmgtia r1!, {r3, r4, ip, lr} @ 4
......
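As a worked check of the new constant (assuming PAGE_SZ = 4096 and a 32-byte L1 cache line): each loop iteration stores 2 * L1_CACHE_BYTES = 64 bytes, so COPY_COUNT = 4096 / 64 = 64 iterations, and the .rept block unrolls 2 * 32 / 16 - 1 = 3 stm/ldm pairs ahead of the loop-control subs. The PLD( -1 ) adjustment accounts for the final block, which is copied without issuing further preloads past the end of the source page.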
......@@ -771,9 +771,9 @@ void __init at91_add_device_pwm(u32 mask) {}
* AC97
* -------------------------------------------------------------------- */
#if defined(CONFIG_SND_AT91_AC97) || defined(CONFIG_SND_AT91_AC97_MODULE)
#if defined(CONFIG_SND_ATMEL_AC97C) || defined(CONFIG_SND_ATMEL_AC97C_MODULE)
static u64 ac97_dmamask = DMA_BIT_MASK(32);
static struct atmel_ac97_data ac97_data;
static struct ac97c_platform_data ac97_data;
static struct resource ac97_resources[] = {
[0] = {
......@@ -789,7 +789,7 @@ static struct resource ac97_resources[] = {
};
static struct platform_device at91cap9_ac97_device = {
.name = "ac97c",
.name = "atmel_ac97c",
.id = 1,
.dev = {
.dma_mask = &ac97_dmamask,
......@@ -800,7 +800,7 @@ static struct platform_device at91cap9_ac97_device = {
.num_resources = ARRAY_SIZE(ac97_resources),
};
void __init at91_add_device_ac97(struct atmel_ac97_data *data)
void __init at91_add_device_ac97(struct ac97c_platform_data *data)
{
if (!data)
return;
......@@ -818,7 +818,7 @@ void __init at91_add_device_ac97(struct atmel_ac97_data *data)
platform_device_register(&at91cap9_ac97_device);
}
#else
void __init at91_add_device_ac97(struct atmel_ac97_data *data) {}
void __init at91_add_device_ac97(struct ac97c_platform_data *data) {}
#endif
......
......@@ -364,7 +364,7 @@ static struct atmel_lcdfb_info __initdata cap9adk_lcdc_data;
/*
* AC97
*/
static struct atmel_ac97_data cap9adk_ac97_data = {
static struct ac97c_platform_data cap9adk_ac97_data = {
// .reset_pin = ... not connected
};
......
......@@ -340,7 +340,7 @@ static void __init neocore926_add_device_buttons(void) {}
/*
* AC97
*/
static struct atmel_ac97_data neocore926_ac97_data = {
static struct ac97c_platform_data neocore926_ac97_data = {
.reset_pin = AT91_PIN_PA13,
};
......
......@@ -310,6 +310,14 @@ static void __init ek_add_device_buttons(void) {}
#endif
/*
* AC97
* reset_pin is not connected: NRST
*/
static struct ac97c_platform_data ek_ac97_data = {
};
/*
* LEDs ... these could all be PWM-driven, for variable brightness
*/
......@@ -372,6 +380,8 @@ static void __init ek_board_init(void)
at91_add_device_lcdc(&ek_lcdc_data);
/* Push Buttons */
ek_add_device_buttons();
/* AC97 */
at91_add_device_ac97(&ek_ac97_data);
/* LEDs */
at91_gpio_leds(ek_leds, ARRAY_SIZE(ek_leds));
at91_pwm_leds(ek_pwm_led, ARRAY_SIZE(ek_pwm_led));
......