Commit ac7c5353 authored by Paul Mackerras

Merge branch 'linux-2.6'

...@@ -271,8 +271,6 @@ netlabel/ ...@@ -271,8 +271,6 @@ netlabel/
- directory with information on the NetLabel subsystem. - directory with information on the NetLabel subsystem.
networking/ networking/
- directory with info on various aspects of networking with Linux. - directory with info on various aspects of networking with Linux.
nfsroot.txt
- short guide on setting up a diskless box with NFS root filesystem.
nmi_watchdog.txt nmi_watchdog.txt
- info on NMI watchdog for SMP systems. - info on NMI watchdog for SMP systems.
nommu-mmap.txt nommu-mmap.txt
...@@ -321,8 +319,6 @@ robust-futexes.txt ...@@ -321,8 +319,6 @@ robust-futexes.txt
- a description of what robust futexes are. - a description of what robust futexes are.
rocket.txt rocket.txt
- info on the Comtrol RocketPort multiport serial driver. - info on the Comtrol RocketPort multiport serial driver.
rpc-cache.txt
- introduction to the caching mechanisms in the sunrpc layer.
rt-mutex-design.txt rt-mutex-design.txt
- description of the RealTime mutex implementation design. - description of the RealTime mutex implementation design.
rt-mutex.txt rt-mutex.txt
......
...@@ -328,7 +328,7 @@ now, but you can do this to mark internal company procedures or just ...@@ -328,7 +328,7 @@ now, but you can do this to mark internal company procedures or just
point out some special detail about the sign-off. point out some special detail about the sign-off.
13) When to use Acked-by: 13) When to use Acked-by: and Cc:
The Signed-off-by: tag indicates that the signer was involved in the The Signed-off-by: tag indicates that the signer was involved in the
development of the patch, or that he/she was in the patch's delivery path. development of the patch, or that he/she was in the patch's delivery path.
...@@ -349,11 +349,59 @@ Acked-by: does not necessarily indicate acknowledgement of the entire patch. ...@@ -349,11 +349,59 @@ Acked-by: does not necessarily indicate acknowledgement of the entire patch.
For example, if a patch affects multiple subsystems and has an Acked-by: from For example, if a patch affects multiple subsystems and has an Acked-by: from
one subsystem maintainer then this usually indicates acknowledgement of just one subsystem maintainer then this usually indicates acknowledgement of just
the part which affects that maintainer's code. Judgement should be used here. the part which affects that maintainer's code. Judgement should be used here.
When in doubt people should refer to the original discussion in the mailing When in doubt people should refer to the original discussion in the mailing
list archives. list archives.
If a person has had the opportunity to comment on a patch, but has not
provided such comments, you may optionally add a "Cc:" tag to the patch.
This is the only tag which might be added without an explicit action by the
person it names. This tag documents that potentially interested parties
have been included in the discussion.
14) The canonical patch format
14) Using Tested-by: and Reviewed-by:
A Tested-by: tag indicates that the patch has been successfully tested (in
some environment) by the person named. This tag informs maintainers that
some testing has been performed, provides a means to locate testers for
future patches, and ensures credit for the testers.
Reviewed-by:, instead, indicates that the patch has been reviewed and found
acceptable according to the Reviewer's Statement:
Reviewer's statement of oversight
By offering my Reviewed-by: tag, I state that:
(a) I have carried out a technical review of this patch to
evaluate its appropriateness and readiness for inclusion into
the mainline kernel.
(b) Any problems, concerns, or questions relating to the patch
have been communicated back to the submitter. I am satisfied
with the submitter's response to my comments.
(c) While there may be things that could be improved with this
submission, I believe that it is, at this time, (1) a
worthwhile modification to the kernel, and (2) free of known
issues which would argue against its inclusion.
(d) While I have reviewed the patch and believe it to be sound, I
do not (unless explicitly stated elsewhere) make any
warranties or guarantees that it will achieve its stated
purpose or function properly in any given situation.
A Reviewed-by tag is a statement of opinion that the patch is an
appropriate modification of the kernel without any remaining serious
technical issues. Any interested reviewer (who has done the work) can
offer a Reviewed-by tag for a patch. This tag serves to give credit to
reviewers and to inform maintainers of the degree of review which has been
done on the patch. Reviewed-by: tags, when supplied by reviewers known to
understand the subject area and to perform thorough reviews, will normally
increase the likelihood of your patch getting into the kernel.
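For illustration, a patch that has picked up both kinds of feedback might end
with a tag block like the following (all names and addresses here are made up):

	Signed-off-by: Random Developer <rdev@example.org>
	Reviewed-by: Subsystem Reviewer <reviewer@example.org>
	Tested-by: Willing Tester <tester@example.org>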
15) The canonical patch format
The canonical patch subject line is: The canonical patch subject line is:
...@@ -512,7 +560,7 @@ They provide type safety, have no length limitations, no formatting ...@@ -512,7 +560,7 @@ They provide type safety, have no length limitations, no formatting
limitations, and under gcc they are as cheap as macros. limitations, and under gcc they are as cheap as macros.
Macros should only be used for cases where a static inline is clearly Macros should only be used for cases where a static inline is clearly
suboptimal [there a few, isolated cases of this in fast paths], suboptimal [there are a few, isolated cases of this in fast paths],
or where it is impossible to use a static inline function [such as or where it is impossible to use a static inline function [such as
string-izing]. string-izing].
......
...@@ -66,6 +66,8 @@ mandatory-locking.txt ...@@ -66,6 +66,8 @@ mandatory-locking.txt
- info on the Linux implementation of Sys V mandatory file locking. - info on the Linux implementation of Sys V mandatory file locking.
ncpfs.txt ncpfs.txt
- info on Novell Netware(tm) filesystem using NCP protocol. - info on Novell Netware(tm) filesystem using NCP protocol.
nfsroot.txt
- short guide on setting up a diskless box with NFS root filesystem.
ntfs.txt ntfs.txt
- info and mount options for the NTFS filesystem (Windows NT). - info and mount options for the NTFS filesystem (Windows NT).
ocfs2.txt ocfs2.txt
...@@ -82,6 +84,10 @@ relay.txt ...@@ -82,6 +84,10 @@ relay.txt
- info on relay, for efficient streaming from kernel to user space. - info on relay, for efficient streaming from kernel to user space.
romfs.txt romfs.txt
- description of the ROMFS filesystem. - description of the ROMFS filesystem.
rpc-cache.txt
- introduction to the caching mechanisms in the sunrpc layer.
seq_file.txt
- how to use the seq_file API.
sharedsubtree.txt sharedsubtree.txt
- a description of shared subtrees for namespaces. - a description of shared subtrees for namespaces.
smbfs.txt smbfs.txt
......
The seq_file interface
Copyright 2003 Jonathan Corbet <corbet@lwn.net>
This file is originally from the LWN.net Driver Porting series at
http://lwn.net/Articles/driver-porting/
There are numerous ways for a device driver (or other kernel component) to
provide information to the user or system administrator. One useful
technique is the creation of virtual files, in debugfs, /proc or elsewhere.
Virtual files can provide human-readable output that is easy to get at
without any special utility programs; they can also make life easier for
script writers. It is not surprising that the use of virtual files has
grown over the years.
Creating those files correctly has always been a bit of a challenge,
however. It is not that hard to make a virtual file which returns a
string. But life gets trickier if the output is long - anything greater
than an application is likely to read in a single operation. Handling
multiple reads (and seeks) requires careful attention to the reader's
position within the virtual file - that position is, likely as not, in the
middle of a line of output. The kernel has traditionally had a number of
implementations that got this wrong.
The 2.6 kernel contains a set of functions (implemented by Alexander Viro)
which are designed to make it easy for virtual file creators to get it
right.
The seq_file interface is available via <linux/seq_file.h>. There are
three aspects to seq_file:
* An iterator interface which lets a virtual file implementation
step through the objects it is presenting.
* Some utility functions for formatting objects for output without
needing to worry about things like output buffers.
* A set of canned file_operations which implement most operations on
the virtual file.
We'll look at the seq_file interface via an extremely simple example: a
loadable module which creates a file called /proc/sequence. The file, when
read, simply produces a set of increasing integer values, one per line. The
sequence will continue until the user loses patience and finds something
better to do. The file is seekable, in that one can do something like the
following:
dd if=/proc/sequence of=out1 count=1
dd if=/proc/sequence skip=1 of=out2 count=1
Then concatenate the output files out1 and out2 and get the right
result. Yes, it is a thoroughly useless module, but the point is to show
how the mechanism works without getting lost in other details. (Those
wanting to see the full source for this module can find it at
http://lwn.net/Articles/22359/).
The iterator interface
Modules implementing a virtual file with seq_file must implement a simple
iterator object that allows stepping through the data of interest.
Iterators must be able to move to a specific position - like the file they
implement - but the interpretation of that position is up to the iterator
itself. A seq_file implementation that is formatting firewall rules, for
example, could interpret position N as the Nth rule in the chain.
Positioning can thus be done in whatever way makes the most sense for the
generator of the data, which need not be aware of how a position translates
to an offset in the virtual file. The one obvious exception is that a
position of zero should indicate the beginning of the file.
The /proc/sequence iterator just uses the count of the next number it
will output as its position.
Four functions must be implemented to make the iterator work. The first,
called start(), takes a position as an argument and returns an iterator
which will start reading at that position. For our simple sequence example,
the start() function looks like:
static void *ct_seq_start(struct seq_file *s, loff_t *pos)
{
loff_t *spos = kmalloc(sizeof(loff_t), GFP_KERNEL);
if (! spos)
return NULL;
*spos = *pos;
return spos;
}
The entire data structure for this iterator is a single loff_t value
holding the current position. There is no upper bound for the sequence
iterator, but that will not be the case for most other seq_file
implementations; in most cases the start() function should check for a
"past end of file" condition and return NULL if need be.
For more complicated applications, the private field of the seq_file
structure can be used. There is also a special value which can be returned
by the start() function called SEQ_START_TOKEN; it can be used if you wish
to instruct your show() function (described below) to print a header at the
top of the output. SEQ_START_TOKEN should only be used if the offset is
zero, however.
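As a sketch (separate from the /proc/sequence module, and with max_records and
record_array purely hypothetical), a start() function that both performs the
"past end of file" check and asks show() to print a header could look like:

	static void *hdr_seq_start(struct seq_file *s, loff_t *pos)
	{
		if (*pos == 0)
			return SEQ_START_TOKEN;
		if (*pos > max_records)		/* past end of file */
			return NULL;
		return &record_array[*pos - 1];
	}

	static int hdr_seq_show(struct seq_file *s, void *v)
	{
		if (v == SEQ_START_TOKEN)
			seq_puts(s, "value\n");
		else
			seq_printf(s, "%ld\n", *(long *)v);
		return 0;
	}

(The matching next() and stop() functions are omitted here.)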
The next function to implement is called, amazingly, next(); its job is to
move the iterator forward to the next position in the sequence. The
example module can simply increment the position by one; more useful
modules will do what is needed to step through some data structure. The
next() function returns a new iterator, or NULL if the sequence is
complete. Here's the example version:
static void *ct_seq_next(struct seq_file *s, void *v, loff_t *pos)
{
loff_t *spos = v;
*pos = ++*spos;
return spos;
}
The stop() function is called when iteration is complete; its job, of
course, is to clean up. If dynamic memory is allocated for the iterator,
stop() is the place to free it.
static void ct_seq_stop(struct seq_file *s, void *v)
{
kfree(v);
}
Finally, the show() function should format the object currently pointed to
by the iterator for output. It should return zero, or an error code if
something goes wrong. The example module's show() function is:
static int ct_seq_show(struct seq_file *s, void *v)
{
loff_t *spos = v;
seq_printf(s, "%lld\n", (long long)*spos);
return 0;
}
We will look at seq_printf() in a moment. But first, the definition of the
seq_file iterator is finished by creating a seq_operations structure with
the four functions we have just defined:
static const struct seq_operations ct_seq_ops = {
.start = ct_seq_start,
.next = ct_seq_next,
.stop = ct_seq_stop,
.show = ct_seq_show
};
This structure will be needed to tie our iterator to the /proc file in
a little bit.
It's worth noting that the iterator value returned by start() and
manipulated by the other functions is considered to be completely opaque by
the seq_file code. It can thus be anything that is useful in stepping
through the data to be output. Counters can be useful, but it could also be
a direct pointer into an array or linked list. Anything goes, as long as
the programmer is aware that things can happen between calls to the
iterator function. However, the seq_file code (by design) will not sleep
between the calls to start() and stop(), so holding a lock during that time
is a reasonable thing to do. The seq_file code will also avoid taking any
other locks while the iterator is active.
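For instance, an iterator walking a lock-protected structure might take the
lock in start() and drop it in stop(); in this sketch, ex_lock and
ex_find_item() are hypothetical:

	static void *ex_seq_start(struct seq_file *s, loff_t *pos)
	{
		spin_lock(&ex_lock);		/* held for the whole iteration */
		return ex_find_item(*pos);	/* NULL once past the end */
	}

	static void ex_seq_stop(struct seq_file *s, void *v)
	{
		spin_unlock(&ex_lock);
	}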
Formatted output
The seq_file code manages positioning within the output created by the
iterator and getting it into the user's buffer. But, for that to work, that
output must be passed to the seq_file code. Some utility functions have
been defined which make this task easy.
Most code will simply use seq_printf(), which works pretty much like
printk(), but which requires the seq_file pointer as an argument. It is
common to ignore the return value from seq_printf(), but a function
producing complicated output may want to check that value and quit if
something non-zero is returned; an error return means that the seq_file
buffer has been filled and further output will be discarded.
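A show() function emitting many lines might, as a sketch, stop as soon as the
buffer fills (nr_items and item_name() are hypothetical):

	for (i = 0; i < nr_items; i++)
		if (seq_printf(s, "%d: %s\n", i, item_name(i)) != 0)
			break;	/* buffer full; anything further would be discarded */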
For straight character output, the following functions may be used:
int seq_putc(struct seq_file *m, char c);
int seq_puts(struct seq_file *m, const char *s);
int seq_escape(struct seq_file *m, const char *s, const char *esc);
The first two output a single character and a string, just like one would
expect. seq_escape() is like seq_puts(), except that any character in s
which is in the string esc will be represented in octal form in the output.
There is also a function for printing filenames:
int seq_path(struct seq_file *m, struct path *path, char *esc);
Here, path indicates the file of interest, and esc is a set of characters
which should be escaped in the output.
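As a small sketch, a show() routine holding an open struct file pointer
(called filp here, purely as an assumption) could print its path followed by a
newline with:

	seq_path(m, &filp->f_path, " \t\n\\");
	seq_putc(m, '\n');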
Making it all work
So far, we have a nice set of functions which can produce output within the
seq_file system, but we have not yet turned them into a file that a user
can see. Creating a file within the kernel requires, of course, the
creation of a set of file_operations which implement the operations on that
file. The seq_file interface provides a set of canned operations which do
most of the work. The virtual file author still must implement the open()
method, however, to hook everything up. The open function is often a single
line, as in the example module:
static int ct_open(struct inode *inode, struct file *file)
{
return seq_open(file, &ct_seq_ops);
}
Here, the call to seq_open() takes the seq_operations structure we created
before, and gets set up to iterate through the virtual file.
On a successful open, seq_open() stores the struct seq_file pointer in
file->private_data. If you have an application where the same iterator can
be used for more than one file, you can store an arbitrary pointer in the
private field of the seq_file structure; that value can then be retrieved
by the iterator functions.
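A sketch of that technique: an open() method can attach per-file data right
after seq_open() succeeds (ex_lookup_data() is a hypothetical helper):

	static int ex_open(struct inode *inode, struct file *file)
	{
		int ret = seq_open(file, &ct_seq_ops);

		if (ret == 0) {
			struct seq_file *m = file->private_data;
			m->private = ex_lookup_data(inode);
		}
		return ret;
	}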
The other operations of interest - read(), llseek(), and release() - are
all implemented by the seq_file code itself. So a virtual file's
file_operations structure will look like:
static const struct file_operations ct_file_ops = {
.owner = THIS_MODULE,
.open = ct_open,
.read = seq_read,
.llseek = seq_lseek,
.release = seq_release
};
There is also a seq_release_private() which passes the contents of the
seq_file private field to kfree() before releasing the structure.
The final step is the creation of the /proc file itself. In the example
code, that is done in the initialization code in the usual way:
static int ct_init(void)
{
struct proc_dir_entry *entry;
entry = create_proc_entry("sequence", 0, NULL);
if (entry)
entry->proc_fops = &ct_file_ops;
return 0;
}
module_init(ct_init);
And that is pretty much it.
seq_list
If your file will be iterating through a linked list, you may find these
routines useful:
struct list_head *seq_list_start(struct list_head *head,
loff_t pos);
struct list_head *seq_list_start_head(struct list_head *head,
loff_t pos);
struct list_head *seq_list_next(void *v, struct list_head *head,
loff_t *ppos);
These helpers will interpret pos as a position within the list and iterate
accordingly. Your start() and next() functions need only invoke the
seq_list_* helpers with a pointer to the appropriate list_head structure.
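For a hypothetical list rooted at ex_list, the resulting functions are, as a
sketch, one-liners:

	static void *list_seq_start(struct seq_file *s, loff_t *pos)
	{
		return seq_list_start(&ex_list, *pos);
	}

	static void *list_seq_next(struct seq_file *s, void *v, loff_t *pos)
	{
		return seq_list_next(v, &ex_list, pos);
	}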
The extra-simple version
For extremely simple virtual files, there is an even easier interface. A
module can define only the show() function, which should create all the
output that the virtual file will contain. The file's open() method then
calls:
int single_open(struct file *file,
int (*show)(struct seq_file *m, void *p),
void *data);
When output time comes, the show() function will be called once. The data
value given to single_open() can be found in the private field of the
seq_file structure. When using single_open(), the programmer should use
single_release() instead of seq_release() in the file_operations structure
to avoid a memory leak.
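Putting the pieces together, a minimal single-shot file might look like this
sketch (all names are hypothetical):

	static int once_show(struct seq_file *m, void *p)
	{
		seq_printf(m, "version: %d\n", 42);
		return 0;
	}

	static int once_open(struct inode *inode, struct file *file)
	{
		return single_open(file, once_show, NULL);
	}

	static const struct file_operations once_fops = {
		.owner		= THIS_MODULE,
		.open		= once_open,
		.read		= seq_read,
		.llseek		= seq_lseek,
		.release	= single_release,
	};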
...@@ -98,7 +98,7 @@ System-level global event devices are used for the Linux periodic tick. Per-CPU ...@@ -98,7 +98,7 @@ System-level global event devices are used for the Linux periodic tick. Per-CPU
event devices are used to provide local CPU functionality such as process event devices are used to provide local CPU functionality such as process
accounting, profiling, and high resolution timers. accounting, profiling, and high resolution timers.
The management layer assignes one or more of the folliwing functions to a clock The management layer assigns one or more of the following functions to a clock
event device: event device:
- system global periodic tick (jiffies update) - system global periodic tick (jiffies update)
- cpu local update_process_times - cpu local update_process_times
......
...@@ -70,7 +70,7 @@ Every PCI card emits a PCI IRQ, which can be INTA, INTB, INTC or INTD: ...@@ -70,7 +70,7 @@ Every PCI card emits a PCI IRQ, which can be INTA, INTB, INTC or INTD:
These INTA-D PCI IRQs are always 'local to the card', their real meaning These INTA-D PCI IRQs are always 'local to the card', their real meaning
depends on which slot they are in. If you look at the daisy chaining diagram, depends on which slot they are in. If you look at the daisy chaining diagram,
a card in slot4, issuing INTA IRQ, it will end up as a signal on PIRQ2 of a card in slot4, issuing INTA IRQ, it will end up as a signal on PIRQ4 of
the PCI chipset. Most cards issue INTA, this creates optimal distribution the PCI chipset. Most cards issue INTA, this creates optimal distribution
between the PIRQ lines. (distributing IRQ sources properly is not a between the PIRQ lines. (distributing IRQ sources properly is not a
necessity, PCI IRQs can be shared at will, but it's a good for performance necessity, PCI IRQs can be shared at will, but it's a good for performance
......
...@@ -170,11 +170,6 @@ and is between 256 and 4096 characters. It is defined in the file ...@@ -170,11 +170,6 @@ and is between 256 and 4096 characters. It is defined in the file
acpi_irq_isa= [HW,ACPI] If irq_balance, mark listed IRQs used by ISA acpi_irq_isa= [HW,ACPI] If irq_balance, mark listed IRQs used by ISA
Format: <irq>,<irq>... Format: <irq>,<irq>...
acpi_new_pts_ordering [HW,ACPI]
Enforce the ACPI 2.0 ordering of the _PTS control
method wrt putting devices into low power states
default: pre ACPI 2.0 ordering of _PTS
acpi_no_auto_ssdt [HW,ACPI] Disable automatic loading of SSDT acpi_no_auto_ssdt [HW,ACPI] Disable automatic loading of SSDT
acpi_os_name= [HW,ACPI] Tell ACPI BIOS the name of the OS acpi_os_name= [HW,ACPI] Tell ACPI BIOS the name of the OS
...@@ -380,6 +375,10 @@ and is between 256 and 4096 characters. It is defined in the file ...@@ -380,6 +375,10 @@ and is between 256 and 4096 characters. It is defined in the file
ccw_timeout_log [S390] ccw_timeout_log [S390]
See Documentation/s390/CommonIO for details. See Documentation/s390/CommonIO for details.
cgroup_disable= [KNL] Disable a particular controller
Format: {name of the controller(s) to disable}
{Currently supported controllers - "memory"}
checkreqprot [SELINUX] Set initial checkreqprot flag value. checkreqprot [SELINUX] Set initial checkreqprot flag value.
Format: { "0" | "1" } Format: { "0" | "1" }
See security/selinux/Kconfig help text. See security/selinux/Kconfig help text.
...@@ -845,7 +844,7 @@ and is between 256 and 4096 characters. It is defined in the file ...@@ -845,7 +844,7 @@ and is between 256 and 4096 characters. It is defined in the file
arch/alpha/kernel/core_marvel.c. arch/alpha/kernel/core_marvel.c.
ip= [IP_PNP] ip= [IP_PNP]
See Documentation/nfsroot.txt. See Documentation/filesystems/nfsroot.txt.
ip2= [HW] Set IO/IRQ pairs for up to 4 IntelliPort boards ip2= [HW] Set IO/IRQ pairs for up to 4 IntelliPort boards
See comment before ip2_setup() in See comment before ip2_setup() in
...@@ -1201,10 +1200,10 @@ and is between 256 and 4096 characters. It is defined in the file ...@@ -1201,10 +1200,10 @@ and is between 256 and 4096 characters. It is defined in the file
file if at all. file if at all.
nfsaddrs= [NFS] nfsaddrs= [NFS]
See Documentation/nfsroot.txt. See Documentation/filesystems/nfsroot.txt.
nfsroot= [NFS] nfs root filesystem for disk-less boxes. nfsroot= [NFS] nfs root filesystem for disk-less boxes.
See Documentation/nfsroot.txt. See Documentation/filesystems/nfsroot.txt.
nfs.callback_tcpport= nfs.callback_tcpport=
[NFS] set the TCP port on which the NFSv4 callback [NFS] set the TCP port on which the NFSv4 callback
......
/*P:100 This is the Launcher code, a simple program which lays out the /*P:100 This is the Launcher code, a simple program which lays out the
* "physical" memory for the new Guest by mapping the kernel image and the * "physical" memory for the new Guest by mapping the kernel image and
* virtual devices, then reads repeatedly from /dev/lguest to run the Guest. * the virtual devices, then opens /dev/lguest to tell the kernel
:*/ * about the Guest and control it. :*/
#define _LARGEFILE64_SOURCE #define _LARGEFILE64_SOURCE
#define _GNU_SOURCE #define _GNU_SOURCE
#include <stdio.h> #include <stdio.h>
...@@ -43,7 +43,7 @@ ...@@ -43,7 +43,7 @@
#include "linux/virtio_console.h" #include "linux/virtio_console.h"
#include "linux/virtio_ring.h" #include "linux/virtio_ring.h"
#include "asm-x86/bootparam.h" #include "asm-x86/bootparam.h"
/*L:110 We can ignore the 38 include files we need for this program, but I do /*L:110 We can ignore the 39 include files we need for this program, but I do
* want to draw attention to the use of kernel-style types. * want to draw attention to the use of kernel-style types.
* *
* As Linus said, "C is a Spartan language, and so should your naming be." I * As Linus said, "C is a Spartan language, and so should your naming be." I
...@@ -320,7 +320,7 @@ static unsigned long map_elf(int elf_fd, const Elf32_Ehdr *ehdr) ...@@ -320,7 +320,7 @@ static unsigned long map_elf(int elf_fd, const Elf32_Ehdr *ehdr)
err(1, "Reading program headers"); err(1, "Reading program headers");
/* Try all the headers: there are usually only three. A read-only one, /* Try all the headers: there are usually only three. A read-only one,
* a read-write one, and a "note" section which isn't loadable. */ * a read-write one, and a "note" section which we don't load. */
for (i = 0; i < ehdr->e_phnum; i++) { for (i = 0; i < ehdr->e_phnum; i++) {
/* If this isn't a loadable segment, we ignore it */ /* If this isn't a loadable segment, we ignore it */
if (phdr[i].p_type != PT_LOAD) if (phdr[i].p_type != PT_LOAD)
...@@ -387,7 +387,7 @@ static unsigned long load_kernel(int fd) ...@@ -387,7 +387,7 @@ static unsigned long load_kernel(int fd)
if (memcmp(hdr.e_ident, ELFMAG, SELFMAG) == 0) if (memcmp(hdr.e_ident, ELFMAG, SELFMAG) == 0)
return map_elf(fd, &hdr); return map_elf(fd, &hdr);
/* Otherwise we assume it's a bzImage, and try to unpack it */ /* Otherwise we assume it's a bzImage, and try to load it. */
return load_bzimage(fd); return load_bzimage(fd);
} }
...@@ -433,12 +433,12 @@ static unsigned long load_initrd(const char *name, unsigned long mem) ...@@ -433,12 +433,12 @@ static unsigned long load_initrd(const char *name, unsigned long mem)
return len; return len;
} }
/* Once we know how much memory we have, we can construct simple linear page /* Once we know how much memory we have we can construct simple linear page
* tables which set virtual == physical which will get the Guest far enough * tables which set virtual == physical which will get the Guest far enough
* into the boot to create its own. * into the boot to create its own.
* *
* We lay them out of the way, just below the initrd (which is why we need to * We lay them out of the way, just below the initrd (which is why we need to
* know its size). */ * know its size here). */
static unsigned long setup_pagetables(unsigned long mem, static unsigned long setup_pagetables(unsigned long mem,
unsigned long initrd_size) unsigned long initrd_size)
{ {
...@@ -850,7 +850,8 @@ static void handle_console_output(int fd, struct virtqueue *vq) ...@@ -850,7 +850,8 @@ static void handle_console_output(int fd, struct virtqueue *vq)
* *
* Handling output for network is also simple: we get all the output buffers * Handling output for network is also simple: we get all the output buffers
* and write them (ignoring the first element) to this device's file descriptor * and write them (ignoring the first element) to this device's file descriptor
* (stdout). */ * (/dev/net/tun).
*/
static void handle_net_output(int fd, struct virtqueue *vq) static void handle_net_output(int fd, struct virtqueue *vq)
{ {
unsigned int head, out, in; unsigned int head, out, in;
...@@ -924,7 +925,7 @@ static void enable_fd(int fd, struct virtqueue *vq) ...@@ -924,7 +925,7 @@ static void enable_fd(int fd, struct virtqueue *vq)
write(waker_fd, &vq->dev->fd, sizeof(vq->dev->fd)); write(waker_fd, &vq->dev->fd, sizeof(vq->dev->fd));
} }
/* Resetting a device is fairly easy. */ /* When the Guest asks us to reset a device, it's is fairly easy. */
static void reset_device(struct device *dev) static void reset_device(struct device *dev)
{ {
struct virtqueue *vq; struct virtqueue *vq;
...@@ -1003,8 +1004,8 @@ static void handle_input(int fd) ...@@ -1003,8 +1004,8 @@ static void handle_input(int fd)
if (select(devices.max_infd+1, &fds, NULL, NULL, &poll) == 0) if (select(devices.max_infd+1, &fds, NULL, NULL, &poll) == 0)
break; break;
/* Otherwise, call the device(s) which have readable /* Otherwise, call the device(s) which have readable file
* file descriptors and a method of handling them. */ * descriptors and a method of handling them. */
for (i = devices.dev; i; i = i->next) { for (i = devices.dev; i; i = i->next) {
if (i->handle_input && FD_ISSET(i->fd, &fds)) { if (i->handle_input && FD_ISSET(i->fd, &fds)) {
int dev_fd; int dev_fd;
...@@ -1015,8 +1016,7 @@ static void handle_input(int fd) ...@@ -1015,8 +1016,7 @@ static void handle_input(int fd)
* should no longer service it. Networking and * should no longer service it. Networking and
* console do this when there's no input * console do this when there's no input
* buffers to deliver into. Console also uses * buffers to deliver into. Console also uses
* it when it discovers that stdin is * it when it discovers that stdin is closed. */
* closed. */
FD_CLR(i->fd, &devices.infds); FD_CLR(i->fd, &devices.infds);
/* Tell waker to ignore it too, by sending a /* Tell waker to ignore it too, by sending a
* negative fd number (-1, since 0 is a valid * negative fd number (-1, since 0 is a valid
...@@ -1033,7 +1033,8 @@ static void handle_input(int fd) ...@@ -1033,7 +1033,8 @@ static void handle_input(int fd)
* *
* All devices need a descriptor so the Guest knows it exists, and a "struct * All devices need a descriptor so the Guest knows it exists, and a "struct
* device" so the Launcher can keep track of it. We have common helper * device" so the Launcher can keep track of it. We have common helper
* routines to allocate and manage them. */ * routines to allocate and manage them.
*/
/* The layout of the device page is a "struct lguest_device_desc" followed by a /* The layout of the device page is a "struct lguest_device_desc" followed by a
* number of virtqueue descriptors, then two sets of feature bits, then an * number of virtqueue descriptors, then two sets of feature bits, then an
...@@ -1078,7 +1079,7 @@ static void add_virtqueue(struct device *dev, unsigned int num_descs, ...@@ -1078,7 +1079,7 @@ static void add_virtqueue(struct device *dev, unsigned int num_descs,
struct virtqueue **i, *vq = malloc(sizeof(*vq)); struct virtqueue **i, *vq = malloc(sizeof(*vq));
void *p; void *p;
/* First we need some pages for this virtqueue. */ /* First we need some memory for this virtqueue. */
pages = (vring_size(num_descs, getpagesize()) + getpagesize() - 1) pages = (vring_size(num_descs, getpagesize()) + getpagesize() - 1)
/ getpagesize(); / getpagesize();
p = get_pages(pages); p = get_pages(pages);
...@@ -1122,7 +1123,7 @@ static void add_virtqueue(struct device *dev, unsigned int num_descs, ...@@ -1122,7 +1123,7 @@ static void add_virtqueue(struct device *dev, unsigned int num_descs,
} }
/* The first half of the feature bitmask is for us to advertise features. The /* The first half of the feature bitmask is for us to advertise features. The
* second half if for the Guest to accept features. */ * second half is for the Guest to accept features. */
static void add_feature(struct device *dev, unsigned bit) static void add_feature(struct device *dev, unsigned bit)
{ {
u8 *features = get_feature_bits(dev); u8 *features = get_feature_bits(dev);
...@@ -1151,7 +1152,9 @@ static void set_config(struct device *dev, unsigned len, const void *conf) ...@@ -1151,7 +1152,9 @@ static void set_config(struct device *dev, unsigned len, const void *conf)
} }
/* This routine does all the creation and setup of a new device, including /* This routine does all the creation and setup of a new device, including
* calling new_dev_desc() to allocate the descriptor and device memory. */ * calling new_dev_desc() to allocate the descriptor and device memory.
*
* See what I mean about userspace being boring? */
static struct device *new_device(const char *name, u16 type, int fd, static struct device *new_device(const char *name, u16 type, int fd,
bool (*handle_input)(int, struct device *)) bool (*handle_input)(int, struct device *))
{ {
...@@ -1383,7 +1386,6 @@ struct vblk_info ...@@ -1383,7 +1386,6 @@ struct vblk_info
* Launcher triggers interrupt to Guest. */ * Launcher triggers interrupt to Guest. */
int done_fd; int done_fd;
}; };
/*:*/
/*L:210 /*L:210
* The Disk * The Disk
...@@ -1493,7 +1495,10 @@ static int io_thread(void *_dev) ...@@ -1493,7 +1495,10 @@ static int io_thread(void *_dev)
while (read(vblk->workpipe[0], &c, 1) == 1) { while (read(vblk->workpipe[0], &c, 1) == 1) {
/* We acknowledge each request immediately to reduce latency, /* We acknowledge each request immediately to reduce latency,
* rather than waiting until we've done them all. I haven't * rather than waiting until we've done them all. I haven't
* measured to see if it makes any difference. */ * measured to see if it makes any difference.
*
* That would be an interesting test, wouldn't it? You could
* also try having more than one I/O thread. */
while (service_io(dev)) while (service_io(dev))
write(vblk->done_fd, &c, 1); write(vblk->done_fd, &c, 1);
} }
...@@ -1501,7 +1506,7 @@ static int io_thread(void *_dev) ...@@ -1501,7 +1506,7 @@ static int io_thread(void *_dev)
} }
/* Now we've seen the I/O thread, we return to the Launcher to see what happens /* Now we've seen the I/O thread, we return to the Launcher to see what happens
* when the thread tells us it's completed some I/O. */ * when that thread tells us it's completed some I/O. */
static bool handle_io_finish(int fd, struct device *dev) static bool handle_io_finish(int fd, struct device *dev)
{ {
char c; char c;
...@@ -1573,11 +1578,12 @@ static void setup_block_file(const char *filename) ...@@ -1573,11 +1578,12 @@ static void setup_block_file(const char *filename)
* more work. */ * more work. */
pipe(vblk->workpipe); pipe(vblk->workpipe);
/* Create stack for thread and run it */ /* Create stack for thread and run it. Since stack grows upwards, we
* point the stack pointer to the end of this region. */
stack = malloc(32768); stack = malloc(32768);
/* SIGCHLD - We dont "wait" for our cloned thread, so prevent it from /* SIGCHLD - We dont "wait" for our cloned thread, so prevent it from
* becoming a zombie. */ * becoming a zombie. */
if (clone(io_thread, stack + 32768, CLONE_VM | SIGCHLD, dev) == -1) if (clone(io_thread, stack + 32768, CLONE_VM | SIGCHLD, dev) == -1)
err(1, "Creating clone"); err(1, "Creating clone");
/* We don't need to keep the I/O thread's end of the pipes open. */ /* We don't need to keep the I/O thread's end of the pipes open. */
...@@ -1587,14 +1593,14 @@ static void setup_block_file(const char *filename) ...@@ -1587,14 +1593,14 @@ static void setup_block_file(const char *filename)
verbose("device %u: virtblock %llu sectors\n", verbose("device %u: virtblock %llu sectors\n",
devices.device_num, le64_to_cpu(conf.capacity)); devices.device_num, le64_to_cpu(conf.capacity));
} }
/* That's the end of device setup. :*/ /* That's the end of device setup. */
/* Reboot */ /*L:230 Reboot is pretty easy: clean up and exec() the Launcher afresh. */
static void __attribute__((noreturn)) restart_guest(void) static void __attribute__((noreturn)) restart_guest(void)
{ {
unsigned int i; unsigned int i;
/* Closing pipes causes the waker thread and io_threads to die, and /* Closing pipes causes the Waker thread and io_threads to die, and
* closing /dev/lguest cleans up the Guest. Since we don't track all * closing /dev/lguest cleans up the Guest. Since we don't track all
* open fds, we simply close everything beyond stderr. */ * open fds, we simply close everything beyond stderr. */
for (i = 3; i < FD_SETSIZE; i++) for (i = 3; i < FD_SETSIZE; i++)
...@@ -1603,7 +1609,7 @@ static void __attribute__((noreturn)) restart_guest(void) ...@@ -1603,7 +1609,7 @@ static void __attribute__((noreturn)) restart_guest(void)
err(1, "Could not exec %s", main_args[0]); err(1, "Could not exec %s", main_args[0]);
} }
/*L:220 Finally we reach the core of the Launcher, which runs the Guest, serves /*L:220 Finally we reach the core of the Launcher which runs the Guest, serves
* its input and output, and finally, lays it to rest. */ * its input and output, and finally, lays it to rest. */
static void __attribute__((noreturn)) run_guest(int lguest_fd) static void __attribute__((noreturn)) run_guest(int lguest_fd)
{ {
...@@ -1644,7 +1650,7 @@ static void __attribute__((noreturn)) run_guest(int lguest_fd) ...@@ -1644,7 +1650,7 @@ static void __attribute__((noreturn)) run_guest(int lguest_fd)
err(1, "Resetting break"); err(1, "Resetting break");
} }
} }
/* /*L:240
* This is the end of the Launcher. The good news: we are over halfway * This is the end of the Launcher. The good news: we are over halfway
* through! The bad news: the most fiendish part of the code still lies ahead * through! The bad news: the most fiendish part of the code still lies ahead
* of us. * of us.
...@@ -1691,8 +1697,8 @@ int main(int argc, char *argv[]) ...@@ -1691,8 +1697,8 @@ int main(int argc, char *argv[])
* device receive input from a file descriptor, we keep an fdset * device receive input from a file descriptor, we keep an fdset
* (infds) and the maximum fd number (max_infd) with the head of the * (infds) and the maximum fd number (max_infd) with the head of the
* list. We also keep a pointer to the last device. Finally, we keep * list. We also keep a pointer to the last device. Finally, we keep
* the next interrupt number to hand out (1: remember that 0 is used by * the next interrupt number to use for devices (1: remember that 0 is
* the timer). */ * used by the timer). */
FD_ZERO(&devices.infds); FD_ZERO(&devices.infds);
devices.max_infd = -1; devices.max_infd = -1;
devices.lastdev = NULL; devices.lastdev = NULL;
...@@ -1793,8 +1799,8 @@ int main(int argc, char *argv[]) ...@@ -1793,8 +1799,8 @@ int main(int argc, char *argv[])
lguest_fd = tell_kernel(pgdir, start); lguest_fd = tell_kernel(pgdir, start);
/* We fork off a child process, which wakes the Launcher whenever one /* We fork off a child process, which wakes the Launcher whenever one
* of the input file descriptors needs attention. Otherwise we would * of the input file descriptors needs attention. We call this the
* run the Guest until it tries to output something. */ * Waker, and we'll cover it in a moment. */
waker_fd = setup_waker(lguest_fd); waker_fd = setup_waker(lguest_fd);
/* Finally, run the Guest. This doesn't return. */ /* Finally, run the Guest. This doesn't return. */
......
Rusty's Remarkably Unreliable Guide to Lguest __
- or, A Young Coder's Illustrated Hypervisor (___()'`; Rusty's Remarkably Unreliable Guide to Lguest
http://lguest.ozlabs.org /, /` - or, A Young Coder's Illustrated Hypervisor
\\"--\\ http://lguest.ozlabs.org
Lguest is designed to be a minimal hypervisor for the Linux kernel, for Lguest is designed to be a minimal hypervisor for the Linux kernel, for
Linux developers and users to experiment with virtualization with the Linux developers and users to experiment with virtualization with the
...@@ -41,12 +42,16 @@ Running Lguest: ...@@ -41,12 +42,16 @@ Running Lguest:
CONFIG_PHYSICAL_ALIGN=0x100000) CONFIG_PHYSICAL_ALIGN=0x100000)
"Device Drivers": "Device Drivers":
"Block devices"
"Virtio block driver (EXPERIMENTAL)" = M/Y
"Network device support" "Network device support"
"Universal TUN/TAP device driver support" = M/Y "Universal TUN/TAP device driver support" = M/Y
(CONFIG_TUN=m) "Virtio network driver (EXPERIMENTAL)" = M/Y
"Virtualization" (CONFIG_VIRTIO_BLK=m, CONFIG_VIRTIO_NET=m and CONFIG_TUN=m)
"Linux hypervisor example code" = M/Y
(CONFIG_LGUEST=m) "Virtualization"
"Linux hypervisor example code" = M/Y
(CONFIG_LGUEST=m)
- A tool called "lguest" is available in this directory: type "make" - A tool called "lguest" is available in this directory: type "make"
to build it. If you didn't build your kernel in-tree, use "make to build it. If you didn't build your kernel in-tree, use "make
......
...@@ -84,9 +84,6 @@ policy-routing.txt ...@@ -84,9 +84,6 @@ policy-routing.txt
- IP policy-based routing - IP policy-based routing
ray_cs.txt ray_cs.txt
- Raylink Wireless LAN card driver info. - Raylink Wireless LAN card driver info.
sk98lin.txt
- Marvell Yukon Chipset / SysKonnect SK-98xx compliant Gigabit
Ethernet Adapter family driver info
skfp.txt skfp.txt
- SysKonnect FDDI (SK-5xxx, Compaq Netelligent) driver info. - SysKonnect FDDI (SK-5xxx, Compaq Netelligent) driver info.
smc9.txt smc9.txt
......
This diff has been collapsed.
...@@ -23,8 +23,7 @@ kernel debugging options, such as Kernel Stack Meter or Kernel Tracer, ...@@ -23,8 +23,7 @@ kernel debugging options, such as Kernel Stack Meter or Kernel Tracer,
may implicitly disable the NMI watchdog.] may implicitly disable the NMI watchdog.]
For x86-64, the needed APIC is always compiled in, and the NMI watchdog is For x86-64, the needed APIC is always compiled in, and the NMI watchdog is
always enabled with I/O-APIC mode (nmi_watchdog=1). Currently, local APIC always enabled with I/O-APIC mode (nmi_watchdog=1).
mode (nmi_watchdog=2) does not work on x86-64.
Using local APIC (nmi_watchdog=2) needs the first performance register, so Using local APIC (nmi_watchdog=2) needs the first performance register, so
you can't use it for other purposes (such as high precision performance you can't use it for other purposes (such as high precision performance
......
...@@ -12,5 +12,7 @@ sched-domains.txt ...@@ -12,5 +12,7 @@ sched-domains.txt
- information on scheduling domains. - information on scheduling domains.
sched-nice-design.txt sched-nice-design.txt
- How and why the scheduler's nice levels are implemented. - How and why the scheduler's nice levels are implemented.
sched-rt-group.txt
- real-time group scheduling.
sched-stats.txt sched-stats.txt
- information on schedstats (Linux Scheduler Statistics). - information on schedstats (Linux Scheduler Statistics).
...@@ -116,6 +116,13 @@ low order bit. So when a chip's timing diagram shows the clock ...@@ -116,6 +116,13 @@ low order bit. So when a chip's timing diagram shows the clock
starting low (CPOL=0) and data stabilized for sampling during the starting low (CPOL=0) and data stabilized for sampling during the
trailing clock edge (CPHA=1), that's SPI mode 1. trailing clock edge (CPHA=1), that's SPI mode 1.
Note that the clock mode is relevant as soon as the chipselect goes
active. So the master must set the clock to inactive before selecting
a slave, and the slave can tell the chosen polarity by sampling the
clock level when its select line goes active. That's why many devices
support, for example, both modes 0 and 3: they don't care about polarity,
and always clock data in/out on rising clock edges.
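From a protocol driver's point of view, choosing a mode is just a matter of
setting spi->mode before calling spi_setup(); in this sketch, spi is an
already-probed struct spi_device pointer and status a local int:

	spi->mode = SPI_MODE_3;			/* CPOL=1, CPHA=1 */
	status = spi_setup(spi);
	if (status < 0)
		dev_err(&spi->dev, "spi_setup failed: %d\n", status);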
How do these driver programming interfaces work? How do these driver programming interfaces work?
------------------------------------------------ ------------------------------------------------
...@@ -379,8 +386,14 @@ any more such messages. ...@@ -379,8 +386,14 @@ any more such messages.
+ when bidirectional reads and writes start ... by how its + when bidirectional reads and writes start ... by how its
sequence of spi_transfer requests is arranged; sequence of spi_transfer requests is arranged;
+ which I/O buffers are used ... each spi_transfer wraps a
buffer for each transfer direction, supporting full duplex
(two pointers, maybe the same one in both cases) and half
duplex (one pointer is NULL) transfers;
+ optionally defining short delays after transfers ... using + optionally defining short delays after transfers ... using
the spi_transfer.delay_usecs setting; the spi_transfer.delay_usecs setting (this delay can be the
only protocol effect, if the buffer length is zero);
+ whether the chipselect becomes inactive after a transfer and + whether the chipselect becomes inactive after a transfer and
any delay ... by using the spi_transfer.cs_change flag; any delay ... by using the spi_transfer.cs_change flag;
......
...@@ -5,6 +5,28 @@ Please use DEFINE_SPINLOCK()/DEFINE_RWLOCK() or ...@@ -5,6 +5,28 @@ Please use DEFINE_SPINLOCK()/DEFINE_RWLOCK() or
__SPIN_LOCK_UNLOCKED()/__RW_LOCK_UNLOCKED() as appropriate for static __SPIN_LOCK_UNLOCKED()/__RW_LOCK_UNLOCKED() as appropriate for static
initialization. initialization.
Most of the time, you can simply turn:
static spinlock_t xxx_lock = SPIN_LOCK_UNLOCKED;
into:
static DEFINE_SPINLOCK(xxx_lock);
Static structure member variables go from:
struct foo bar = {
	.lock = SPIN_LOCK_UNLOCKED,
};
to:
struct foo bar = {
	.lock = __SPIN_LOCK_UNLOCKED(bar.lock),
};
Declarations of static rw_locks undergo a similar transformation.
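For example (a sketch following the same pattern), a static rwlock goes from:

	static rwlock_t xxx_rw_lock = RW_LOCK_UNLOCKED;

to:

	static DEFINE_RWLOCK(xxx_rw_lock);

and a structure member from .rwlock = RW_LOCK_UNLOCKED to
.rwlock = __RW_LOCK_UNLOCKED(bar.rwlock).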
Dynamic initialization, when necessary, may be performed as Dynamic initialization, when necessary, may be performed as
demonstrated below. demonstrated below.
......
...@@ -57,7 +57,7 @@ here; a summary of the common scenarios is presented below: ...@@ -57,7 +57,7 @@ here; a summary of the common scenarios is presented below:
unaligned access to be corrected. unaligned access to be corrected.
- Some architectures are not capable of unaligned memory access, but will - Some architectures are not capable of unaligned memory access, but will
silently perform a different memory access to the one that was requested, silently perform a different memory access to the one that was requested,
resulting a a subtle code bug that is hard to detect! resulting in a subtle code bug that is hard to detect!
It should be obvious from the above that if your code causes unaligned It should be obvious from the above that if your code causes unaligned
memory accesses to happen, your code will not work correctly on certain memory accesses to happen, your code will not work correctly on certain
...@@ -209,7 +209,7 @@ memory and you wish to avoid unaligned access, its usage is as follows: ...@@ -209,7 +209,7 @@ memory and you wish to avoid unaligned access, its usage is as follows:
u32 value = get_unaligned((u32 *) data); u32 value = get_unaligned((u32 *) data);
These macros work work for memory accesses of any length (not just 32 bits as These macros work for memory accesses of any length (not just 32 bits as
in the examples above). Be aware that when compared to standard access of in the examples above). Be aware that when compared to standard access of
aligned memory, using these macros to access unaligned memory can be costly in aligned memory, using these macros to access unaligned memory can be costly in
terms of performance. terms of performance.
......
...@@ -163,6 +163,12 @@ M: A2232@gmx.net ...@@ -163,6 +163,12 @@ M: A2232@gmx.net
L: linux-m68k@lists.linux-m68k.org L: linux-m68k@lists.linux-m68k.org
S: Maintained S: Maintained
AFS FILESYSTEM & AF_RXRPC SOCKET DOMAIN
P: David Howells
M: dhowells@redhat.com
L: linux-afs@lists.infradead.org
S: Supported
AIO AIO
P: Benjamin LaHaise P: Benjamin LaHaise
M: bcrl@kvack.org M: bcrl@kvack.org
...@@ -2110,7 +2116,7 @@ M: reinette.chatre@intel.com ...@@ -2110,7 +2116,7 @@ M: reinette.chatre@intel.com
L: linux-wireless@vger.kernel.org L: linux-wireless@vger.kernel.org
L: ipw3945-devel@lists.sourceforge.net L: ipw3945-devel@lists.sourceforge.net
W: http://intellinuxwireless.org W: http://intellinuxwireless.org
T: git git://intellinuxwireless.org/repos/iwlwifi T: git git://git.kernel.org/pub/scm/linux/kernel/git/rchatre/iwlwifi-2.6.git
S: Supported S: Supported
IOC3 ETHERNET DRIVER IOC3 ETHERNET DRIVER
...@@ -2314,14 +2320,14 @@ L: kexec@lists.infradead.org ...@@ -2314,14 +2320,14 @@ L: kexec@lists.infradead.org
S: Maintained S: Maintained
KPROBES KPROBES
P: Prasanna S Panchamukhi
M: prasanna@in.ibm.com
P: Ananth N Mavinakayanahalli P: Ananth N Mavinakayanahalli
M: ananth@in.ibm.com M: ananth@in.ibm.com
P: Anil S Keshavamurthy P: Anil S Keshavamurthy
M: anil.s.keshavamurthy@intel.com M: anil.s.keshavamurthy@intel.com
P: David S. Miller P: David S. Miller
M: davem@davemloft.net M: davem@davemloft.net
P: Masami Hiramatsu
M: mhiramat@redhat.com
L: linux-kernel@vger.kernel.org L: linux-kernel@vger.kernel.org
S: Maintained S: Maintained
......
VERSION = 2 VERSION = 2
PATCHLEVEL = 6 PATCHLEVEL = 6
SUBLEVEL = 25 SUBLEVEL = 25
EXTRAVERSION = -rc6 EXTRAVERSION = -rc9
NAME = Funky Weasel is Jiggy wit it NAME = Funky Weasel is Jiggy wit it
# *DOCUMENTATION* # *DOCUMENTATION*
......
...@@ -424,11 +424,13 @@ EXPORT_SYMBOL(pci_unmap_page); ...@@ -424,11 +424,13 @@ EXPORT_SYMBOL(pci_unmap_page);
else DMA_ADDRP is undefined. */ else DMA_ADDRP is undefined. */
void * void *
pci_alloc_consistent(struct pci_dev *pdev, size_t size, dma_addr_t *dma_addrp) __pci_alloc_consistent(struct pci_dev *pdev, size_t size,
dma_addr_t *dma_addrp, gfp_t gfp)
{ {
void *cpu_addr; void *cpu_addr;
long order = get_order(size); long order = get_order(size);
gfp_t gfp = GFP_ATOMIC;
gfp &= ~GFP_DMA;
try_again: try_again:
cpu_addr = (void *)__get_free_pages(gfp, order); cpu_addr = (void *)__get_free_pages(gfp, order);
...@@ -458,7 +460,7 @@ pci_alloc_consistent(struct pci_dev *pdev, size_t size, dma_addr_t *dma_addrp) ...@@ -458,7 +460,7 @@ pci_alloc_consistent(struct pci_dev *pdev, size_t size, dma_addr_t *dma_addrp)
return cpu_addr; return cpu_addr;
} }
EXPORT_SYMBOL(pci_alloc_consistent); EXPORT_SYMBOL(__pci_alloc_consistent);
/* Free and unmap a consistent DMA buffer. CPU_ADDR and DMA_ADDR must /* Free and unmap a consistent DMA buffer. CPU_ADDR and DMA_ADDR must
be values that were returned from pci_alloc_consistent. SIZE must be values that were returned from pci_alloc_consistent. SIZE must
......
...@@ -120,6 +120,7 @@ void it8152_irq_demux(unsigned int irq, struct irq_desc *desc) ...@@ -120,6 +120,7 @@ void it8152_irq_demux(unsigned int irq, struct irq_desc *desc)
time, when they all three were 0. */ time, when they all three were 0. */
bits_pd = __raw_readl(IT8152_INTC_PDCNIRR); bits_pd = __raw_readl(IT8152_INTC_PDCNIRR);
bits_lp = __raw_readl(IT8152_INTC_LPCNIRR); bits_lp = __raw_readl(IT8152_INTC_LPCNIRR);
bits_ld = __raw_readl(IT8152_INTC_LDCNIRR);
if (!(bits_ld | bits_lp | bits_pd)) if (!(bits_ld | bits_lp | bits_pd))
return; return;
} }
...@@ -133,14 +134,14 @@ void it8152_irq_demux(unsigned int irq, struct irq_desc *desc) ...@@ -133,14 +134,14 @@ void it8152_irq_demux(unsigned int irq, struct irq_desc *desc)
bits_lp &= ((1 << IT8152_LP_IRQ_COUNT) - 1); bits_lp &= ((1 << IT8152_LP_IRQ_COUNT) - 1);
while (bits_lp) { while (bits_lp) {
i = __ffs(bits_pd); i = __ffs(bits_lp);
it8152_irq(IT8152_LP_IRQ(i)); it8152_irq(IT8152_LP_IRQ(i));
bits_lp &= ~(1 << i); bits_lp &= ~(1 << i);
} }
bits_ld &= ((1 << IT8152_LD_IRQ_COUNT) - 1); bits_ld &= ((1 << IT8152_LD_IRQ_COUNT) - 1);
while (bits_ld) { while (bits_ld) {
i = __ffs(bits_pd); i = __ffs(bits_ld);
it8152_irq(IT8152_LD_IRQ(i)); it8152_irq(IT8152_LD_IRQ(i));
bits_ld &= ~(1 << i); bits_ld &= ~(1 << i);
} }
......
...@@ -336,7 +336,7 @@ ...@@ -336,7 +336,7 @@
CALL(sys_mknodat) CALL(sys_mknodat)
/* 325 */ CALL(sys_fchownat) /* 325 */ CALL(sys_fchownat)
CALL(sys_futimesat) CALL(sys_futimesat)
CALL(sys_fstatat64) CALL(ABI(sys_fstatat64, sys_oabi_fstatat64))
CALL(sys_unlinkat) CALL(sys_unlinkat)
CALL(sys_renameat) CALL(sys_renameat)
/* 330 */ CALL(sys_linkat) /* 330 */ CALL(sys_linkat)
......
...@@ -25,6 +25,7 @@ ...@@ -25,6 +25,7 @@
* sys_stat64: * sys_stat64:
* sys_lstat64: * sys_lstat64:
* sys_fstat64: * sys_fstat64:
* sys_fstatat64:
* *
* struct stat64 has different sizes and some members are shifted * struct stat64 has different sizes and some members are shifted
* Compatibility wrappers are needed for them and provided below. * Compatibility wrappers are needed for them and provided below.
...@@ -169,6 +170,29 @@ asmlinkage long sys_oabi_fstat64(unsigned long fd, ...@@ -169,6 +170,29 @@ asmlinkage long sys_oabi_fstat64(unsigned long fd,
return error; return error;
} }
asmlinkage long sys_oabi_fstatat64(int dfd,
char __user *filename,
struct oldabi_stat64 __user *statbuf,
int flag)
{
struct kstat stat;
int error = -EINVAL;
if ((flag & ~AT_SYMLINK_NOFOLLOW) != 0)
goto out;
if (flag & AT_SYMLINK_NOFOLLOW)
error = vfs_lstat_fd(dfd, filename, &stat);
else
error = vfs_stat_fd(dfd, filename, &stat);
if (!error)
error = cp_oldabi_stat64(&stat, statbuf);
out:
return error;
}
struct oabi_flock64 { struct oabi_flock64 {
short l_type; short l_type;
short l_whence; short l_whence;
......
...@@ -163,6 +163,7 @@ add_reserved_region(resource_size_t start, resource_size_t end, ...@@ -163,6 +163,7 @@ add_reserved_region(resource_size_t start, resource_size_t end,
new->start = start; new->start = start;
new->end = end; new->end = end;
new->name = name; new->name = name;
new->sibling = next;
new->flags = IORESOURCE_MEM; new->flags = IORESOURCE_MEM;
*pprev = new; *pprev = new;
......
...@@ -178,6 +178,7 @@ static int do_cop_absent(u32 insn) ...@@ -178,6 +178,7 @@ static int do_cop_absent(u32 insn)
return 0; return 0;
} }
#ifdef CONFIG_BUG
int is_valid_bugaddr(unsigned long pc) int is_valid_bugaddr(unsigned long pc)
{ {
unsigned short opcode; unsigned short opcode;
...@@ -189,6 +190,7 @@ int is_valid_bugaddr(unsigned long pc) ...@@ -189,6 +190,7 @@ int is_valid_bugaddr(unsigned long pc)
return opcode == AVR32_BUG_OPCODE; return opcode == AVR32_BUG_OPCODE;
} }
#endif
asmlinkage void do_illegal_opcode(unsigned long ecr, struct pt_regs *regs) asmlinkage void do_illegal_opcode(unsigned long ecr, struct pt_regs *regs)
{ {
...@@ -197,6 +199,7 @@ asmlinkage void do_illegal_opcode(unsigned long ecr, struct pt_regs *regs) ...@@ -197,6 +199,7 @@ asmlinkage void do_illegal_opcode(unsigned long ecr, struct pt_regs *regs)
void __user *pc; void __user *pc;
long code; long code;
#ifdef CONFIG_BUG
if (!user_mode(regs) && (ecr == ECR_ILLEGAL_OPCODE)) { if (!user_mode(regs) && (ecr == ECR_ILLEGAL_OPCODE)) {
enum bug_trap_type type; enum bug_trap_type type;
...@@ -211,6 +214,7 @@ asmlinkage void do_illegal_opcode(unsigned long ecr, struct pt_regs *regs) ...@@ -211,6 +214,7 @@ asmlinkage void do_illegal_opcode(unsigned long ecr, struct pt_regs *regs)
die("Kernel BUG", regs, SIGKILL); die("Kernel BUG", regs, SIGKILL);
} }
} }
#endif
local_irq_enable(); local_irq_enable();
......
...@@ -316,8 +316,14 @@ __trap_fixup_kernel_data_tlb_miss: ...@@ -316,8 +316,14 @@ __trap_fixup_kernel_data_tlb_miss:
.section .trap.vector .section .trap.vector
.org TBR_TT_TRAP0 >> 2 .org TBR_TT_TRAP0 >> 2
.long system_call .long system_call
.rept 126 .rept 119
.long __entry_unsupported_trap .long __entry_unsupported_trap
.endr .endr
# userspace atomic op emulation, traps 120-126
.rept 7
.long __entry_atomic_op
.endr
.org TBR_TT_BREAK >> 2 .org TBR_TT_BREAK >> 2
.long __entry_debug_exception .long __entry_debug_exception
...@@ -654,6 +654,26 @@ __entry_debug_exception: ...@@ -654,6 +654,26 @@ __entry_debug_exception:
movgs gr4,psr movgs gr4,psr
jmpl @(gr5,gr0) ; call ill_insn(esfr1,epcr0,esr0) jmpl @(gr5,gr0) ; call ill_insn(esfr1,epcr0,esr0)
###############################################################################
#
# handle atomic operation emulation for userspace
#
###############################################################################
.globl __entry_atomic_op
__entry_atomic_op:
LEDS 0x6012
sethi.p %hi(atomic_operation),gr5
setlo %lo(atomic_operation),gr5
movsg esfr1,gr8
movsg epcr0,gr9
movsg esr0,gr10
# now that we've accessed the exception regs, we can enable exceptions
movsg psr,gr4
ori gr4,#PSR_ET,gr4
movgs gr4,psr
jmpl @(gr5,gr0) ; call atomic_operation(esfr1,epcr0,esr0)
############################################################################### ###############################################################################
# #
# handle media exception # handle media exception
......
...@@ -46,5 +46,5 @@ ...@@ -46,5 +46,5 @@
#ifdef CONFIG_MMU #ifdef CONFIG_MMU
__sdram_base = 0x00000000 /* base address to which SDRAM relocated */ __sdram_base = 0x00000000 /* base address to which SDRAM relocated */
#else #else
__sdram_base = 0xc0000000 /* base address to which SDRAM relocated */ __sdram_base = __page_offset /* base address to which SDRAM relocated */
#endif #endif
...@@ -102,13 +102,6 @@ __switch_to: ...@@ -102,13 +102,6 @@ __switch_to:
movgs gr14,lr movgs gr14,lr
bar bar
srli gr15,#28,gr5
subicc gr5,#0xc,gr0,icc0
beq icc0,#0,111f
break
nop
111:
# jump to __switch_back or ret_from_fork as appropriate # jump to __switch_back or ret_from_fork as appropriate
# - move prev to GR8 # - move prev to GR8
movgs gr4,psr movgs gr4,psr
......
...@@ -100,6 +100,233 @@ asmlinkage void illegal_instruction(unsigned long esfr1, unsigned long epcr0, un ...@@ -100,6 +100,233 @@ asmlinkage void illegal_instruction(unsigned long esfr1, unsigned long epcr0, un
force_sig_info(info.si_signo, &info, current); force_sig_info(info.si_signo, &info, current);
} /* end illegal_instruction() */ } /* end illegal_instruction() */
/*****************************************************************************/
/*
* handle atomic operations with errors
* - arguments in gr8, gr9, gr10
* - original memory value placed in gr5
* - replacement memory value placed in gr9
*/
asmlinkage void atomic_operation(unsigned long esfr1, unsigned long epcr0,
unsigned long esr0)
{
static DEFINE_SPINLOCK(atomic_op_lock);
unsigned long x, y, z, *p;
mm_segment_t oldfs;
siginfo_t info;
int ret;
y = 0;
z = 0;
oldfs = get_fs();
if (!user_mode(__frame))
set_fs(KERNEL_DS);
switch (__frame->tbr & TBR_TT) {
/* TIRA gr0,#120
* u32 __atomic_user_cmpxchg32(u32 *ptr, u32 test, u32 new)
*/
case TBR_TT_ATOMIC_CMPXCHG32:
p = (unsigned long *) __frame->gr8;
x = __frame->gr9;
y = __frame->gr10;
for (;;) {
ret = get_user(z, p);
if (ret < 0)
goto error;
if (z != x)
goto done;
spin_lock_irq(&atomic_op_lock);
if (__get_user(z, p) == 0) {
if (z != x)
goto done2;
if (__put_user(y, p) == 0)
goto done2;
goto error2;
}
spin_unlock_irq(&atomic_op_lock);
}
/* TIRA gr0,#121
* u32 __atomic_kernel_xchg32(void *v, u32 new)
*/
case TBR_TT_ATOMIC_XCHG32:
p = (unsigned long *) __frame->gr8;
y = __frame->gr9;
for (;;) {
ret = get_user(z, p);
if (ret < 0)
goto error;
spin_lock_irq(&atomic_op_lock);
if (__get_user(z, p) == 0) {
if (__put_user(y, p) == 0)
goto done2;
goto error2;
}
spin_unlock_irq(&atomic_op_lock);
}
/* TIRA gr0,#122
* ulong __atomic_kernel_XOR_return(ulong i, ulong *v)
*/
case TBR_TT_ATOMIC_XOR:
p = (unsigned long *) __frame->gr8;
x = __frame->gr9;
for (;;) {
ret = get_user(z, p);
if (ret < 0)
goto error;
spin_lock_irq(&atomic_op_lock);
if (__get_user(z, p) == 0) {
y = x ^ z;
if (__put_user(y, p) == 0)
goto done2;
goto error2;
}
spin_unlock_irq(&atomic_op_lock);
}
/* TIRA gr0,#123
* ulong __atomic_kernel_OR_return(ulong i, ulong *v)
*/
case TBR_TT_ATOMIC_OR:
p = (unsigned long *) __frame->gr8;
x = __frame->gr9;
for (;;) {
ret = get_user(z, p);
if (ret < 0)
goto error;
spin_lock_irq(&atomic_op_lock);
if (__get_user(z, p) == 0) {
y = x | z;
if (__put_user(y, p) == 0)
goto done2;
goto error2;
}
spin_unlock_irq(&atomic_op_lock);
}
/* TIRA gr0,#124
* ulong __atomic_kernel_AND_return(ulong i, ulong *v)
*/
case TBR_TT_ATOMIC_AND:
p = (unsigned long *) __frame->gr8;
x = __frame->gr9;
for (;;) {
ret = get_user(z, p);
if (ret < 0)
goto error;
spin_lock_irq(&atomic_op_lock);
if (__get_user(z, p) == 0) {
y = x & z;
if (__put_user(y, p) == 0)
goto done2;
goto error2;
}
spin_unlock_irq(&atomic_op_lock);
}
/* TIRA gr0,#125
* int __atomic_user_sub_return(atomic_t *v, int i)
*/
case TBR_TT_ATOMIC_SUB:
p = (unsigned long *) __frame->gr8;
x = __frame->gr9;
for (;;) {
ret = get_user(z, p);
if (ret < 0)
goto error;
spin_lock_irq(&atomic_op_lock);
if (__get_user(z, p) == 0) {
y = z - x;
if (__put_user(y, p) == 0)
goto done2;
goto error2;
}
spin_unlock_irq(&atomic_op_lock);
}
/* TIRA gr0,#126
* int __atomic_user_add_return(atomic_t *v, int i)
*/
case TBR_TT_ATOMIC_ADD:
p = (unsigned long *) __frame->gr8;
x = __frame->gr9;
for (;;) {
ret = get_user(z, p);
if (ret < 0)
goto error;
spin_lock_irq(&atomic_op_lock);
if (__get_user(z, p) == 0) {
y = z + x;
if (__put_user(y, p) == 0)
goto done2;
goto error2;
}
spin_unlock_irq(&atomic_op_lock);
}
default:
BUG();
}
done2:
spin_unlock_irq(&atomic_op_lock);
done:
if (!user_mode(__frame))
set_fs(oldfs);
__frame->gr5 = z;
__frame->gr9 = y;
return;
error2:
spin_unlock_irq(&atomic_op_lock);
error:
if (!user_mode(__frame))
set_fs(oldfs);
__frame->pc -= 4;
die_if_kernel("-- Atomic Op Error --\n");
info.si_signo = SIGSEGV;
info.si_code = SEGV_ACCERR;
info.si_errno = 0;
info.si_addr = (void *) __frame->pc;
force_sig_info(info.si_signo, &info, current);
}
/*****************************************************************************/ /*****************************************************************************/
/* /*
* *
......
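The handler above serialises every emulated operation on a single spinlock, re-reads the word under the lock, and only then stores the replacement value, with the caller retrying if the access faulted. A minimal user-space model of the cmpxchg path, using a pthread mutex in place of atomic_op_lock (illustrative only, not part of the patch):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for atomic_op_lock */
static pthread_mutex_t emu_lock = PTHREAD_MUTEX_INITIALIZER;

/* Lock-based cmpxchg, mirroring the TBR_TT_ATOMIC_CMPXCHG32 case:
 * returns the original value; the store happens only if it matched. */
static uint32_t emu_cmpxchg32(uint32_t *ptr, uint32_t test, uint32_t new)
{
        uint32_t old;

        pthread_mutex_lock(&emu_lock);
        old = *ptr;
        if (old == test)
                *ptr = new;
        pthread_mutex_unlock(&emu_lock);
        return old;
}

int main(void)
{
        uint32_t v = 1;

        if (emu_cmpxchg32(&v, 1, 2) == 1)
                printf("swapped, v is now %u\n", v);    /* prints 2 */
        return 0;
}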
...@@ -13,6 +13,8 @@ ...@@ -13,6 +13,8 @@
# Copyright (C) 1994 by Hamish Macdonald # Copyright (C) 1994 by Hamish Macdonald
# #
KBUILD_DEFCONFIG := amiga_defconfig
# override top level makefile # override top level makefile
AS += -m68020 AS += -m68020
LDFLAGS := -m m68kelf LDFLAGS := -m m68kelf
......
(11 file diffs collapsed, not shown.)
...@@ -482,10 +482,13 @@ endif ...@@ -482,10 +482,13 @@ endif
# be 16kb aligned or the handling of the current variable will break. # be 16kb aligned or the handling of the current variable will break.
# Simplified: what IP22 does at 128MB+ in ksegN, IP28 does at 512MB+ in xkphys # Simplified: what IP22 does at 128MB+ in ksegN, IP28 does at 512MB+ in xkphys
# #
#core-$(CONFIG_SGI_IP28) += arch/mips/sgi-ip22/ arch/mips/arc/arc_con.o ifdef CONFIG_SGI_IP28
ifeq ($(call cc-option-yn,-mr10k-cache-barrier=1), n)
$(error gcc doesn't support needed option -mr10k-cache-barrier=1)
endif
endif
core-$(CONFIG_SGI_IP28) += arch/mips/sgi-ip22/ core-$(CONFIG_SGI_IP28) += arch/mips/sgi-ip22/
cflags-$(CONFIG_SGI_IP28) += -mr10k-cache-barrier=1 -Iinclude/asm-mips/mach-ip28 cflags-$(CONFIG_SGI_IP28) += -mr10k-cache-barrier=1 -Iinclude/asm-mips/mach-ip28
#cflags-$(CONFIG_SGI_IP28) += -Iinclude/asm-mips/mach-ip28
load-$(CONFIG_SGI_IP28) += 0xa800000020004000 load-$(CONFIG_SGI_IP28) += 0xa800000020004000
# #
......
...@@ -22,24 +22,24 @@ struct cpu_spec* cur_cpu_spec[NR_CPUS]; ...@@ -22,24 +22,24 @@ struct cpu_spec* cur_cpu_spec[NR_CPUS];
/* With some thought, we can probably use the mask to reduce the /* With some thought, we can probably use the mask to reduce the
* size of the table. * size of the table.
*/ */
struct cpu_spec cpu_specs[] = { struct cpu_spec cpu_specs[] = {
{ 0xffffffff, 0x00030100, "Au1000 DA", 1, 0 }, { 0xffffffff, 0x00030100, "Au1000 DA", 1, 0, 1 },
{ 0xffffffff, 0x00030201, "Au1000 HA", 1, 0 }, { 0xffffffff, 0x00030201, "Au1000 HA", 1, 0, 1 },
{ 0xffffffff, 0x00030202, "Au1000 HB", 1, 0 }, { 0xffffffff, 0x00030202, "Au1000 HB", 1, 0, 1 },
{ 0xffffffff, 0x00030203, "Au1000 HC", 1, 1 }, { 0xffffffff, 0x00030203, "Au1000 HC", 1, 1, 0 },
{ 0xffffffff, 0x00030204, "Au1000 HD", 1, 1 }, { 0xffffffff, 0x00030204, "Au1000 HD", 1, 1, 0 },
{ 0xffffffff, 0x01030200, "Au1500 AB", 1, 1 }, { 0xffffffff, 0x01030200, "Au1500 AB", 1, 1, 0 },
{ 0xffffffff, 0x01030201, "Au1500 AC", 0, 1 }, { 0xffffffff, 0x01030201, "Au1500 AC", 0, 1, 0 },
{ 0xffffffff, 0x01030202, "Au1500 AD", 0, 1 }, { 0xffffffff, 0x01030202, "Au1500 AD", 0, 1, 0 },
{ 0xffffffff, 0x02030200, "Au1100 AB", 1, 1 }, { 0xffffffff, 0x02030200, "Au1100 AB", 1, 1, 0 },
{ 0xffffffff, 0x02030201, "Au1100 BA", 1, 1 }, { 0xffffffff, 0x02030201, "Au1100 BA", 1, 1, 0 },
{ 0xffffffff, 0x02030202, "Au1100 BC", 1, 1 }, { 0xffffffff, 0x02030202, "Au1100 BC", 1, 1, 0 },
{ 0xffffffff, 0x02030203, "Au1100 BD", 0, 1 }, { 0xffffffff, 0x02030203, "Au1100 BD", 0, 1, 0 },
{ 0xffffffff, 0x02030204, "Au1100 BE", 0, 1 }, { 0xffffffff, 0x02030204, "Au1100 BE", 0, 1, 0 },
{ 0xffffffff, 0x03030200, "Au1550 AA", 0, 1 }, { 0xffffffff, 0x03030200, "Au1550 AA", 0, 1, 0 },
{ 0xffffffff, 0x04030200, "Au1200 AB", 0, 0 }, { 0xffffffff, 0x04030200, "Au1200 AB", 0, 0, 0 },
{ 0xffffffff, 0x04030201, "Au1200 AC", 1, 0 }, { 0xffffffff, 0x04030201, "Au1200 AC", 1, 0, 0 },
{ 0x00000000, 0x00000000, "Unknown Au1xxx", 1, 0 }, { 0x00000000, 0x00000000, "Unknown Au1xxx", 1, 0, 0 }
}; };
void void
......
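Each row of the table gains a third flag, which later hunks in this commit read as sp->cpu_pll_wo, i.e. "the CPU PLL register is write-only on this revision". A small sketch of how such a PRID-keyed table is matched; the struct layout and the field names other than cpu_bclk and cpu_pll_wo are assumed here for illustration:

#include <stdio.h>

/* Assumed row layout: a row matches when (prid & prid_mask) == prid_value.
 * The last three columns correspond to the flags in the table above; only
 * cpu_bclk and cpu_pll_wo are actually referenced elsewhere in the commit. */
struct cpu_spec {
        unsigned int prid_mask;
        unsigned int prid_value;
        const char  *cpu_name;
        int          cpu_od;       /* name assumed */
        int          cpu_bclk;
        int          cpu_pll_wo;   /* new column: sys_cpupll is write-only */
};

static const struct cpu_spec cpu_specs[] = {
        { 0xffffffff, 0x00030100, "Au1000 DA",      1, 0, 1 },
        { 0x00000000, 0x00000000, "Unknown Au1xxx", 1, 0, 0 },  /* catch-all */
};

static const struct cpu_spec *match_cpu(unsigned int prid)
{
        const struct cpu_spec *sp = cpu_specs;

        /* the catch-all row (mask 0) guarantees termination */
        while ((prid & sp->prid_mask) != sp->prid_value)
                sp++;
        return sp;
}

int main(void)
{
        const struct cpu_spec *sp = match_cpu(0x00030100);

        printf("%s, PLL write-only: %d\n", sp->cpu_name, sp->cpu_pll_wo);
        return 0;
}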
...@@ -57,7 +57,7 @@ void __init plat_mem_setup(void) ...@@ -57,7 +57,7 @@ void __init plat_mem_setup(void)
{ {
struct cpu_spec *sp; struct cpu_spec *sp;
char *argptr; char *argptr;
unsigned long prid, cpupll, bclk = 1; unsigned long prid, cpufreq, bclk = 1;
set_cpuspec(); set_cpuspec();
sp = cur_cpu_spec[0]; sp = cur_cpu_spec[0];
...@@ -65,8 +65,15 @@ void __init plat_mem_setup(void) ...@@ -65,8 +65,15 @@ void __init plat_mem_setup(void)
board_setup(); /* board specific setup */ board_setup(); /* board specific setup */
prid = read_c0_prid(); prid = read_c0_prid();
cpupll = (au_readl(0xB1900060) & 0x3F) * 12; if (sp->cpu_pll_wo)
printk("(PRId %08lx) @ %ldMHZ\n", prid, cpupll); #ifdef CONFIG_SOC_AU1000_FREQUENCY
cpufreq = CONFIG_SOC_AU1000_FREQUENCY / 1000000;
#else
cpufreq = 396;
#endif
else
cpufreq = (au_readl(SYS_CPUPLL) & 0x3F) * 12;
printk(KERN_INFO "(PRID %08lx) @ %ld MHz\n", prid, cpufreq);
bclk = sp->cpu_bclk; bclk = sp->cpu_bclk;
if (bclk) if (bclk)
......
...@@ -209,18 +209,22 @@ unsigned long cal_r4koff(void) ...@@ -209,18 +209,22 @@ unsigned long cal_r4koff(void)
while (au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_C1S); while (au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_C1S);
au_writel(0, SYS_TOYWRITE); au_writel(0, SYS_TOYWRITE);
while (au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_C1S); while (au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_C1S);
} else
no_au1xxx_32khz = 1;
cpu_speed = (au_readl(SYS_CPUPLL) & 0x0000003f) * /*
AU1000_SRC_CLK; * On early Au1000, sys_cpupll was write-only. Since these
} * silicon versions of Au1000 are not sold by AMD, we don't bend
else { * over backwards trying to determine the frequency.
/* The 32KHz oscillator isn't running, so assume there */
* isn't one and grab the processor speed from the PLL. if (cur_cpu_spec[0]->cpu_pll_wo)
* NOTE: some old silicon doesn't allow reading the PLL. #ifdef CONFIG_SOC_AU1000_FREQUENCY
*/ cpu_speed = CONFIG_SOC_AU1000_FREQUENCY;
#else
cpu_speed = 396000000;
#endif
else
cpu_speed = (au_readl(SYS_CPUPLL) & 0x0000003f) * AU1000_SRC_CLK; cpu_speed = (au_readl(SYS_CPUPLL) & 0x0000003f) * AU1000_SRC_CLK;
no_au1xxx_32khz = 1;
}
mips_hpt_frequency = cpu_speed; mips_hpt_frequency = cpu_speed;
// Equation: Baudrate = CPU / (SD * 2 * CLKDIV * 16) // Equation: Baudrate = CPU / (SD * 2 * CLKDIV * 16)
set_au1x00_uart_baud_base(cpu_speed / (2 * ((int)(au_readl(SYS_POWERCTRL)&0x03) + 2) * 16)); set_au1x00_uart_baud_base(cpu_speed / (2 * ((int)(au_readl(SYS_POWERCTRL)&0x03) + 2) * 16));
......
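The baud-base call above is just the quoted equation with the CLKDIV factor left to the serial core. A quick worked example, with an assumed 396 MHz core clock and an SD field of 0 in SYS_POWERCTRL (both values chosen only for the arithmetic):

#include <stdio.h>

int main(void)
{
        unsigned long cpu_speed = 396000000;    /* assumed core clock, Hz      */
        unsigned int  sd_field  = 0;            /* assumed SYS_POWERCTRL & 0x03 */
        unsigned int  sd        = sd_field + 2; /* SD as used in the call above */

        /* baud_base = CPU / (SD * 2 * 16); the UART driver divides by CLKDIV */
        printf("baud_base = %lu\n", cpu_speed / (2 * sd * 16)); /* 6187500 */
        return 0;
}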
...@@ -33,11 +33,10 @@ ...@@ -33,11 +33,10 @@
#include <asm/cpu.h> #include <asm/cpu.h>
#include <asm/bootinfo.h> #include <asm/bootinfo.h>
#include <asm/irq.h> #include <asm/irq.h>
#include <asm/keyboard.h>
#include <asm/mipsregs.h> #include <asm/mipsregs.h>
#include <asm/reboot.h> #include <asm/reboot.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/au1000.h> #include <asm/mach-au1x00/au1000.h>
void board_reset(void) void board_reset(void)
{ {
......
...@@ -45,7 +45,7 @@ ...@@ -45,7 +45,7 @@
#include <asm/io.h> #include <asm/io.h>
#include <asm/mipsregs.h> #include <asm/mipsregs.h>
#include <asm/system.h> #include <asm/system.h>
#include <asm/au1000.h> #include <asm/mach-au1x00/au1000.h>
struct au1xxx_irqmap __initdata au1xxx_irq_map[] = { struct au1xxx_irqmap __initdata au1xxx_irq_map[] = {
{ AU1500_GPIO_204, INTC_INT_HIGH_LEVEL, 0}, { AU1500_GPIO_204, INTC_INT_HIGH_LEVEL, 0},
......
(1 file diff collapsed, not shown.)
...@@ -139,7 +139,6 @@ ...@@ -139,7 +139,6 @@
#include <asm/system.h> #include <asm/system.h>
#include <asm/gdb-stub.h> #include <asm/gdb-stub.h>
#include <asm/inst.h> #include <asm/inst.h>
#include <asm/smp.h>
/* /*
* external low-level support routines * external low-level support routines
...@@ -656,6 +655,7 @@ void set_async_breakpoint(unsigned long *epc) ...@@ -656,6 +655,7 @@ void set_async_breakpoint(unsigned long *epc)
*epc = (unsigned long)async_breakpoint; *epc = (unsigned long)async_breakpoint;
} }
#ifdef CONFIG_SMP
static void kgdb_wait(void *arg) static void kgdb_wait(void *arg)
{ {
unsigned flags; unsigned flags;
...@@ -668,6 +668,7 @@ static void kgdb_wait(void *arg) ...@@ -668,6 +668,7 @@ static void kgdb_wait(void *arg)
local_irq_restore(flags); local_irq_restore(flags);
} }
#endif
/* /*
* GDB stub needs to call kgdb_wait on all processor with interrupts * GDB stub needs to call kgdb_wait on all processor with interrupts
......
...@@ -15,6 +15,7 @@ ...@@ -15,6 +15,7 @@
#include <asm/time.h> #include <asm/time.h>
DEFINE_SPINLOCK(i8253_lock); DEFINE_SPINLOCK(i8253_lock);
EXPORT_SYMBOL(i8253_lock);
/* /*
* Initialize the PIT timer. * Initialize the PIT timer.
......
...@@ -157,6 +157,6 @@ void __init time_init(void) ...@@ -157,6 +157,6 @@ void __init time_init(void)
{ {
plat_time_init(); plat_time_init();
if (mips_clockevent_init() || !cpu_has_mfc0_count_bug()) if (!mips_clockevent_init() || !cpu_has_mfc0_count_bug())
init_mips_clocksource(); init_mips_clocksource();
} }
...@@ -262,13 +262,21 @@ void dump_mtregs(void) ...@@ -262,13 +262,21 @@ void dump_mtregs(void)
/* Find some VPE program space */ /* Find some VPE program space */
static void *alloc_progmem(unsigned long len) static void *alloc_progmem(unsigned long len)
{ {
void *addr;
#ifdef CONFIG_MIPS_VPE_LOADER_TOM #ifdef CONFIG_MIPS_VPE_LOADER_TOM
/* this means you must tell linux to use less memory than you physically have */ /*
return pfn_to_kaddr(max_pfn); * This means you must tell Linux to use less memory than you
* physically have, for example by passing a mem= boot argument.
*/
addr = pfn_to_kaddr(max_pfn);
memset(addr, 0, len);
#else #else
// simple grab some mem for now /* simply grab some mem for now */
return kmalloc(len, GFP_KERNEL); addr = kzalloc(len, GFP_KERNEL);
#endif #endif
return addr;
} }
static void release_progmem(void *ptr) static void release_progmem(void *ptr)
...@@ -884,9 +892,10 @@ static int vpe_elfload(struct vpe * v) ...@@ -884,9 +892,10 @@ static int vpe_elfload(struct vpe * v)
} }
v->load_addr = alloc_progmem(mod.core_size); v->load_addr = alloc_progmem(mod.core_size);
memset(v->load_addr, 0, mod.core_size); if (!v->load_addr)
return -ENOMEM;
printk("VPE loader: loading to %p\n", v->load_addr); pr_info("VPE loader: loading to %p\n", v->load_addr);
if (relocate) { if (relocate) {
for (i = 0; i < hdr->e_shnum; i++) { for (i = 0; i < hdr->e_shnum; i++) {
......
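The new comment spells out the assumption behind CONFIG_MIPS_VPE_LOADER_TOM: the boot-time mem= argument must understate physical RAM, and the slack above max_pfn becomes VPE program space. A toy calculation for a hypothetical 256 MB board booted with mem=224M (numbers invented for illustration):

#include <stdio.h>

int main(void)
{
        unsigned long page_size = 4096;
        unsigned long phys_mb   = 256;          /* hypothetical board RAM   */
        unsigned long mem_arg   = 224;          /* hypothetical mem= value  */
        unsigned long max_pfn   = mem_arg * 1024 * 1024 / page_size;

        /* pfn_to_kaddr(max_pfn) maps the first byte Linux was told not to use */
        printf("VPE space: %lu MB starting at physical %#lx\n",
               phys_mb - mem_arg, max_pfn * page_size);
        return 0;
}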
...@@ -361,6 +361,16 @@ static inline int has_valid_asid(const struct mm_struct *mm) ...@@ -361,6 +361,16 @@ static inline int has_valid_asid(const struct mm_struct *mm)
#endif #endif
} }
static void r4k__flush_cache_vmap(void)
{
r4k_blast_dcache();
}
static void r4k__flush_cache_vunmap(void)
{
r4k_blast_dcache();
}
static inline void local_r4k_flush_cache_range(void * args) static inline void local_r4k_flush_cache_range(void * args)
{ {
struct vm_area_struct *vma = args; struct vm_area_struct *vma = args;
...@@ -1281,6 +1291,10 @@ void __cpuinit r4k_cache_init(void) ...@@ -1281,6 +1291,10 @@ void __cpuinit r4k_cache_init(void)
PAGE_SIZE - 1); PAGE_SIZE - 1);
else else
shm_align_mask = PAGE_SIZE-1; shm_align_mask = PAGE_SIZE-1;
__flush_cache_vmap = r4k__flush_cache_vmap;
__flush_cache_vunmap = r4k__flush_cache_vunmap;
flush_cache_all = cache_noop; flush_cache_all = cache_noop;
__flush_cache_all = r4k___flush_cache_all; __flush_cache_all = r4k___flush_cache_all;
flush_cache_mm = r4k_flush_cache_mm; flush_cache_mm = r4k_flush_cache_mm;
......
...@@ -122,6 +122,16 @@ static inline void tx39_blast_icache(void) ...@@ -122,6 +122,16 @@ static inline void tx39_blast_icache(void)
local_irq_restore(flags); local_irq_restore(flags);
} }
static void tx39__flush_cache_vmap(void)
{
tx39_blast_dcache();
}
static void tx39__flush_cache_vunmap(void)
{
tx39_blast_dcache();
}
static inline void tx39_flush_cache_all(void) static inline void tx39_flush_cache_all(void)
{ {
if (!cpu_has_dc_aliases) if (!cpu_has_dc_aliases)
...@@ -344,6 +354,8 @@ void __cpuinit tx39_cache_init(void) ...@@ -344,6 +354,8 @@ void __cpuinit tx39_cache_init(void)
switch (current_cpu_type()) { switch (current_cpu_type()) {
case CPU_TX3912: case CPU_TX3912:
/* TX39/H core (writethru direct-map cache) */ /* TX39/H core (writethru direct-map cache) */
__flush_cache_vmap = tx39__flush_cache_vmap;
__flush_cache_vunmap = tx39__flush_cache_vunmap;
flush_cache_all = tx39h_flush_icache_all; flush_cache_all = tx39h_flush_icache_all;
__flush_cache_all = tx39h_flush_icache_all; __flush_cache_all = tx39h_flush_icache_all;
flush_cache_mm = (void *) tx39h_flush_icache_all; flush_cache_mm = (void *) tx39h_flush_icache_all;
...@@ -369,6 +381,9 @@ void __cpuinit tx39_cache_init(void) ...@@ -369,6 +381,9 @@ void __cpuinit tx39_cache_init(void)
write_c0_wired(0); /* set 8 on reset... */ write_c0_wired(0); /* set 8 on reset... */
/* board-dependent init code may set WBON */ /* board-dependent init code may set WBON */
__flush_cache_vmap = tx39__flush_cache_vmap;
__flush_cache_vunmap = tx39__flush_cache_vunmap;
flush_cache_all = tx39_flush_cache_all; flush_cache_all = tx39_flush_cache_all;
__flush_cache_all = tx39___flush_cache_all; __flush_cache_all = tx39___flush_cache_all;
flush_cache_mm = tx39_flush_cache_mm; flush_cache_mm = tx39_flush_cache_mm;
......
...@@ -30,6 +30,9 @@ void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, ...@@ -30,6 +30,9 @@ void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page,
unsigned long pfn); unsigned long pfn);
void (*flush_icache_range)(unsigned long start, unsigned long end); void (*flush_icache_range)(unsigned long start, unsigned long end);
void (*__flush_cache_vmap)(void);
void (*__flush_cache_vunmap)(void);
/* MIPS specific cache operations */ /* MIPS specific cache operations */
void (*flush_cache_sigtramp)(unsigned long addr); void (*flush_cache_sigtramp)(unsigned long addr);
void (*local_flush_data_cache_page)(void * addr); void (*local_flush_data_cache_page)(void * addr);
......
...@@ -307,6 +307,7 @@ static void __cpuinit build_tlb_write_entry(u32 **p, struct uasm_label **l, ...@@ -307,6 +307,7 @@ static void __cpuinit build_tlb_write_entry(u32 **p, struct uasm_label **l,
case CPU_R12000: case CPU_R12000:
case CPU_R14000: case CPU_R14000:
case CPU_4KC: case CPU_4KC:
case CPU_4KEC:
case CPU_SB1: case CPU_SB1:
case CPU_SB1A: case CPU_SB1A:
case CPU_4KSC: case CPU_4KSC:
......
...@@ -185,8 +185,8 @@ static struct resource bcm1480_mem_resource = { ...@@ -185,8 +185,8 @@ static struct resource bcm1480_mem_resource = {
static struct resource bcm1480_io_resource = { static struct resource bcm1480_io_resource = {
.name = "BCM1480 PCI I/O", .name = "BCM1480 PCI I/O",
.start = 0x2c000000UL, .start = A_BCM1480_PHYS_PCI_IO_MATCH_BYTES,
.end = 0x2dffffffUL, .end = A_BCM1480_PHYS_PCI_IO_MATCH_BYTES + 0x1ffffffUL,
.flags = IORESOURCE_IO, .flags = IORESOURCE_IO,
}; };
...@@ -194,6 +194,7 @@ struct pci_controller bcm1480_controller = { ...@@ -194,6 +194,7 @@ struct pci_controller bcm1480_controller = {
.pci_ops = &bcm1480_pci_ops, .pci_ops = &bcm1480_pci_ops,
.mem_resource = &bcm1480_mem_resource, .mem_resource = &bcm1480_mem_resource,
.io_resource = &bcm1480_io_resource, .io_resource = &bcm1480_io_resource,
.io_offset = A_BCM1480_PHYS_PCI_IO_MATCH_BYTES,
}; };
...@@ -251,6 +252,7 @@ static int __init bcm1480_pcibios_init(void) ...@@ -251,6 +252,7 @@ static int __init bcm1480_pcibios_init(void)
bcm1480_controller.io_map_base = (unsigned long) bcm1480_controller.io_map_base = (unsigned long)
ioremap(A_BCM1480_PHYS_PCI_IO_MATCH_BYTES, 65536); ioremap(A_BCM1480_PHYS_PCI_IO_MATCH_BYTES, 65536);
bcm1480_controller.io_map_base -= bcm1480_controller.io_offset;
set_io_port_base(bcm1480_controller.io_map_base); set_io_port_base(bcm1480_controller.io_map_base);
isa_slot_offset = (unsigned long) isa_slot_offset = (unsigned long)
ioremap(A_BCM1480_PHYS_PCI_MEM_MATCH_BYTES, 1024*1024); ioremap(A_BCM1480_PHYS_PCI_MEM_MATCH_BYTES, 1024*1024);
......
...@@ -180,8 +180,8 @@ static struct resource bcm1480ht_mem_resource = { ...@@ -180,8 +180,8 @@ static struct resource bcm1480ht_mem_resource = {
static struct resource bcm1480ht_io_resource = { static struct resource bcm1480ht_io_resource = {
.name = "BCM1480 HT I/O", .name = "BCM1480 HT I/O",
.start = 0x00000000UL, .start = A_BCM1480_PHYS_HT_IO_MATCH_BYTES,
.end = 0x01ffffffUL, .end = A_BCM1480_PHYS_HT_IO_MATCH_BYTES + 0x01ffffffUL,
.flags = IORESOURCE_IO, .flags = IORESOURCE_IO,
}; };
...@@ -191,29 +191,22 @@ struct pci_controller bcm1480ht_controller = { ...@@ -191,29 +191,22 @@ struct pci_controller bcm1480ht_controller = {
.io_resource = &bcm1480ht_io_resource, .io_resource = &bcm1480ht_io_resource,
.index = 1, .index = 1,
.get_busno = bcm1480ht_pcibios_get_busno, .get_busno = bcm1480ht_pcibios_get_busno,
.io_offset = A_BCM1480_PHYS_HT_IO_MATCH_BYTES,
}; };
static int __init bcm1480ht_pcibios_init(void) static int __init bcm1480ht_pcibios_init(void)
{ {
uint32_t cmdreg;
ht_cfg_space = ioremap(A_BCM1480_PHYS_HT_CFG_MATCH_BITS, 16*1024*1024); ht_cfg_space = ioremap(A_BCM1480_PHYS_HT_CFG_MATCH_BITS, 16*1024*1024);
/* /* CFE doesn't always init all HT paths, so we always scan */
* See if the PCI bus has been configured by the firmware.
*/
cmdreg = READCFG32(CFGOFFSET(0, PCI_DEVFN(PCI_BRIDGE_DEVICE, 0),
PCI_COMMAND));
if (!(cmdreg & PCI_COMMAND_MASTER)) {
printk("HT: Skipping HT probe. Bus is not initialized.\n");
iounmap(ht_cfg_space);
return 1; /* XXX */
}
bcm1480ht_bus_status |= PCI_BUS_ENABLED; bcm1480ht_bus_status |= PCI_BUS_ENABLED;
ht_eoi_space = (unsigned long) ht_eoi_space = (unsigned long)
ioremap(A_BCM1480_PHYS_HT_SPECIAL_MATCH_BYTES, ioremap(A_BCM1480_PHYS_HT_SPECIAL_MATCH_BYTES,
4 * 1024 * 1024); 4 * 1024 * 1024);
bcm1480ht_controller.io_map_base = (unsigned long)
ioremap(A_BCM1480_PHYS_HT_IO_MATCH_BYTES, 65536);
bcm1480ht_controller.io_map_base -= bcm1480ht_controller.io_offset;
register_pci_controller(&bcm1480ht_controller); register_pci_controller(&bcm1480ht_controller);
......
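Both I/O windows are now described with their CPU physical (match-bytes) addresses, and io_offset compensates so that a port number inside the resource still lands in the right spot of the ioremap()'d window: by the arithmetic in the hunks above, an access through the remapped base works out to ioremap(MATCH_BYTES) + (port - io_offset). A small sketch of that arithmetic with invented addresses (the helper below is illustrative, not kernel API):

#include <stdio.h>

/* Mirrors the io_map_base = ioremap(...) - io_offset arithmetic above. */
static unsigned long port_to_virt(unsigned long ioremap_window,
                                  unsigned long io_offset,
                                  unsigned long port)
{
        unsigned long io_map_base = ioremap_window - io_offset;

        return io_map_base + port;      /* address a port access would touch */
}

int main(void)
{
        unsigned long window = 0xc0000000UL;    /* pretend ioremap() result    */
        unsigned long offset = 0x2c000000UL;    /* pretend MATCH_BYTES address */
        unsigned long port   = offset + 0x3f8;  /* a port inside the resource  */

        printf("%#lx\n", port_to_virt(window, offset, port));  /* 0xc00003f8 */
        return 0;
}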
...@@ -212,13 +212,30 @@ ...@@ -212,13 +212,30 @@
ethernet@3000 { ethernet@3000 {
device_type = "network"; device_type = "network";
compatible = "fsl,mpc5200b-fec","fsl,mpc5200-fec"; compatible = "fsl,mpc5200b-fec","fsl,mpc5200-fec";
reg = <3000 800>; reg = <3000 400>;
local-mac-address = [ 00 00 00 00 00 00 ]; local-mac-address = [ 00 00 00 00 00 00 ];
interrupts = <2 5 0>; interrupts = <2 5 0>;
interrupt-parent = <&mpc5200_pic>; interrupt-parent = <&mpc5200_pic>;
phy-handle = <&phy0>;
};
mdio@3000 {
#address-cells = <1>;
#size-cells = <0>;
compatible = "fsl,mpc5200b-mdio","fsl,mpc5200-mdio";
reg = <3000 400>; // fec range, since we need to setup fec interrupts
interrupts = <2 5 0>; // these are for "mii command finished", not link changes & co.
interrupt-parent = <&mpc5200_pic>;
phy0: ethernet-phy@0 {
device_type = "ethernet-phy";
reg = <0>;
};
}; };
i2c@3d40 { i2c@3d40 {
#address-cells = <1>;
#size-cells = <0>;
compatible = "fsl,mpc5200b-i2c","fsl,mpc5200-i2c","fsl-i2c"; compatible = "fsl,mpc5200b-i2c","fsl,mpc5200-i2c","fsl-i2c";
reg = <3d40 40>; reg = <3d40 40>;
interrupts = <2 10 0>; interrupts = <2 10 0>;
...@@ -231,4 +248,22 @@ ...@@ -231,4 +248,22 @@
reg = <8000 4000>; reg = <8000 4000>;
}; };
}; };
lpb {
model = "fsl,lpb";
compatible = "fsl,lpb";
#address-cells = <2>;
#size-cells = <1>;
ranges = <0 0 fc000000 2000000>;
// 16-bit flash device at LocalPlus Bus CS0
flash@0,0 {
compatible = "cfi-flash";
reg = <0 0 2000000>;
bank-width = <2>;
device-width = <2>;
#size-cells = <1>;
#address-cells = <1>;
};
};
}; };
...@@ -258,6 +258,21 @@ ...@@ -258,6 +258,21 @@
local-mac-address = [ 00 00 00 00 00 00 ]; local-mac-address = [ 00 00 00 00 00 00 ];
interrupts = <2 5 0>; interrupts = <2 5 0>;
interrupt-parent = <&mpc5200_pic>; interrupt-parent = <&mpc5200_pic>;
phy-handle = <&phy0>;
};
mdio@3000 {
#address-cells = <1>;
#size-cells = <0>;
compatible = "fsl,mpc5200-mdio";
reg = <3000 400>; // fec range, since we need to setup fec interrupts
interrupts = <2 5 0>; // these are for "mii command finished", not link changes & co.
interrupt-parent = <&mpc5200_pic>;
phy0:ethernet-phy@1 {
device_type = "ethernet-phy";
reg = <1>;
};
}; };
ata@3a00 { ata@3a00 {
......
(1 file diff collapsed, not shown.)
...@@ -255,14 +255,14 @@ ...@@ -255,14 +255,14 @@
}; };
sata@18000 { sata@18000 {
compatible = "fsl,mpc8379-sata"; compatible = "fsl,mpc8379-sata", "fsl,pq-sata";
reg = <0x18000 0x1000>; reg = <0x18000 0x1000>;
interrupts = <44 0x8>; interrupts = <44 0x8>;
interrupt-parent = <&ipic>; interrupt-parent = <&ipic>;
}; };
sata@19000 { sata@19000 {
compatible = "fsl,mpc8379-sata"; compatible = "fsl,mpc8379-sata", "fsl,pq-sata";
reg = <0x19000 0x1000>; reg = <0x19000 0x1000>;
interrupts = <45 0x8>; interrupts = <45 0x8>;
interrupt-parent = <&ipic>; interrupt-parent = <&ipic>;
......
...@@ -143,7 +143,6 @@ ...@@ -143,7 +143,6 @@
mode = "cpu"; mode = "cpu";
}; };
/* phy type (ULPI, UTMI, UTMI_WIDE, SERIAL) */
usb@23000 { usb@23000 {
compatible = "fsl-usb2-dr"; compatible = "fsl-usb2-dr";
reg = <0x23000 0x1000>; reg = <0x23000 0x1000>;
...@@ -151,7 +150,7 @@ ...@@ -151,7 +150,7 @@
#size-cells = <0>; #size-cells = <0>;
interrupt-parent = <&ipic>; interrupt-parent = <&ipic>;
interrupts = <38 0x8>; interrupts = <38 0x8>;
phy_type = "utmi"; phy_type = "ulpi";
}; };
mdio@24520 { mdio@24520 {
......
...@@ -143,7 +143,6 @@ ...@@ -143,7 +143,6 @@
mode = "cpu"; mode = "cpu";
}; };
/* phy type (ULPI, UTMI, UTMI_WIDE, SERIAL) */
usb@23000 { usb@23000 {
compatible = "fsl-usb2-dr"; compatible = "fsl-usb2-dr";
reg = <0x23000 0x1000>; reg = <0x23000 0x1000>;
...@@ -151,7 +150,7 @@ ...@@ -151,7 +150,7 @@
#size-cells = <0>; #size-cells = <0>;
interrupt-parent = <&ipic>; interrupt-parent = <&ipic>;
interrupts = <38 0x8>; interrupts = <38 0x8>;
phy_type = "utmi"; phy_type = "ulpi";
}; };
mdio@24520 { mdio@24520 {
......
...@@ -143,7 +143,6 @@ ...@@ -143,7 +143,6 @@
mode = "cpu"; mode = "cpu";
}; };
/* phy type (ULPI, UTMI, UTMI_WIDE, SERIAL) */
usb@23000 { usb@23000 {
compatible = "fsl-usb2-dr"; compatible = "fsl-usb2-dr";
reg = <0x23000 0x1000>; reg = <0x23000 0x1000>;
...@@ -151,7 +150,7 @@ ...@@ -151,7 +150,7 @@
#size-cells = <0>; #size-cells = <0>;
interrupt-parent = <&ipic>; interrupt-parent = <&ipic>;
interrupts = <38 0x8>; interrupts = <38 0x8>;
phy_type = "utmi"; phy_type = "ulpi";
}; };
mdio@24520 { mdio@24520 {
......
(2 file diffs collapsed, not shown.)
...@@ -143,7 +143,6 @@ void local_irq_restore(unsigned long en) ...@@ -143,7 +143,6 @@ void local_irq_restore(unsigned long en)
*/ */
if (local_paca->lppaca_ptr->int_dword.any_int) if (local_paca->lppaca_ptr->int_dword.any_int)
iseries_handle_interrupts(); iseries_handle_interrupts();
return;
} }
/* /*
......
(9 file diffs collapsed, not shown.)
#include <linux/elfcore.h> #include <linux/elfcore.h>
#include <linux/sched.h> #include <linux/sched.h>
#include <asm/fpu.h>
/* /*
* Capture the user space registers if the task is not running (in user space) * Capture the user space registers if the task is not running (in user space)
......
(566 file diffs collapsed, not shown.)