Commit 8644d2a4 authored by Greg KH, committed by Greg Kroah-Hartman
@@ -44,9 +44,9 @@ running, the suggested command should tell you.
 
 Again, keep in mind that this list assumes you are already
 functionally running a Linux 2.4 kernel.  Also, not all tools are
-necessary on all systems; obviously, if you don't have any PCMCIA (PC
-Card) hardware, for example, you probably needn't concern yourself
-with pcmcia-cs.
+necessary on all systems; obviously, if you don't have any ISDN
+hardware, for example, you probably needn't concern yourself with
+isdn4k-utils.
 
 o  Gnu C                  2.95.3                  # gcc --version
 o  Gnu make               3.79.1                  # make --version
@@ -57,6 +57,7 @@ o  e2fsprogs              1.29                    # tune2fs
 o  jfsutils               1.1.3                   # fsck.jfs -V
 o  reiserfsprogs          3.6.3                   # reiserfsck -V 2>&1|grep reiserfsprogs
 o  xfsprogs               2.6.0                   # xfs_db -V
+o  pcmciautils            001
 o  pcmcia-cs              3.1.21                  # cardmgr -V
 o  quota-tools            3.09                    # quota -V
 o  PPP                    2.4.0                   # pppd --version
@@ -186,13 +187,20 @@ architecture independent and any version from 2.0.0 onward should
 work correctly with this version of the XFS kernel code (2.6.0 or
 later is recommended, due to some significant improvements).
 
+PCMCIAutils
+-----------
+
+PCMCIAutils replaces pcmcia-cs (see below).  It properly sets up
+PCMCIA sockets at system startup and loads the appropriate modules
+for 16-bit PCMCIA devices if the kernel is modularized and the hotplug
+subsystem is used.
+
 Pcmcia-cs
 ---------
 
 PCMCIA (PC Card) support is now partially implemented in the main
-kernel source.  Pay attention when you recompile your kernel ;-).
-Also, be sure to upgrade to the latest pcmcia-cs release.
+kernel source.  The "pcmciautils" package (see above) replaces pcmcia-cs
+for newest kernels.
 
 Quota-tools
 -----------
@@ -349,9 +357,13 @@ Xfsprogs
 --------
 o  <ftp://oss.sgi.com/projects/xfs/download/>
 
+Pcmciautils
+-----------
+o  <ftp://ftp.kernel.org/pub/linux/utils/kernel/pcmcia/>
+
 Pcmcia-cs
 ---------
-o  <ftp://pcmcia-cs.sourceforge.net/pub/pcmcia-cs/pcmcia-cs-3.1.21.tar.gz>
+o  <http://pcmcia-cs.sourceforge.net/>
 
 Quota-tools
 ----------
......
Block io priorities
===================
Intro
-----
With the introduction of cfq v3 (aka cfq-ts or time sliced cfq), basic io
priorities are supported for reads on files. This enables users to io nice
processes or process groups, similar to what has been possible with cpu
scheduling for ages. This document mainly details the current possibilities
with cfq; other io schedulers do not support io priorities so far.
Scheduling classes
------------------
CFQ implements three generic scheduling classes that determine how io is
served for a process.
IOPRIO_CLASS_RT: This is the realtime io class. This scheduling class is given
higher priority than any other in the system; processes from this class are
given first access to the disk every time. Thus it needs to be used with some
care: one io RT process can starve the entire system. Within the RT class,
there are 8 levels of class data that determine exactly how much time this
process needs the disk for on each service. In the future this might change
to be more directly mappable to performance, by passing in a wanted data
rate instead.

IOPRIO_CLASS_BE: This is the best-effort scheduling class, which is the default
for any process that hasn't set a specific io priority. The class data
determines how much io bandwidth the process will get; it's directly mappable
to the cpu nice levels, just more coarsely implemented. 0 is the highest
BE prio level, 7 is the lowest. The mapping between cpu nice level and io
nice level is determined as: io_nice = (cpu_nice + 20) / 5.
IOPRIO_CLASS_IDLE: This is the idle scheduling class, processes running at this
level only get io time when no one else needs the disk. The idle class has no
class data, since it doesn't really apply here.
Tools
-----
See below for a sample ionice tool. Usage:
# ionice -c<class> -n<level> -p<pid>
If pid isn't given, the current process is assumed. IO priority settings
are inherited on fork, so you can use ionice to start the process at a given
level:
# ionice -c2 -n0 /bin/ls
will run ls at the best-effort scheduling class at the highest priority.
For a running process, you can give the pid instead:
# ionice -c1 -n2 -p100
will change pid 100 to run at the realtime scheduling class, at priority 2.
---> snip ionice.c tool <---
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <getopt.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <asm/unistd.h>
extern int sys_ioprio_set(int, int, int);
extern int sys_ioprio_get(int, int);
#if defined(__i386__)
#define __NR_ioprio_set 289
#define __NR_ioprio_get 290
#elif defined(__ppc__)
#define __NR_ioprio_set 273
#define __NR_ioprio_get 274
#elif defined(__x86_64__)
#define __NR_ioprio_set 251
#define __NR_ioprio_get 252
#elif defined(__ia64__)
#define __NR_ioprio_set 1274
#define __NR_ioprio_get 1275
#else
#error "Unsupported arch"
#endif
_syscall3(int, ioprio_set, int, which, int, who, int, ioprio);
_syscall2(int, ioprio_get, int, which, int, who);
enum {
	IOPRIO_CLASS_NONE,
	IOPRIO_CLASS_RT,
	IOPRIO_CLASS_BE,
	IOPRIO_CLASS_IDLE,
};

enum {
	IOPRIO_WHO_PROCESS = 1,
	IOPRIO_WHO_PGRP,
	IOPRIO_WHO_USER,
};

#define IOPRIO_CLASS_SHIFT	13

const char *to_prio[] = { "none", "realtime", "best-effort", "idle", };

int main(int argc, char *argv[])
{
	int ioprio = 4, set = 0, ioprio_class = IOPRIO_CLASS_BE;
	int c, pid = 0;

	while ((c = getopt(argc, argv, "+n:c:p:")) != EOF) {
		switch (c) {
		case 'n':
			ioprio = strtol(optarg, NULL, 10);
			set = 1;
			break;
		case 'c':
			ioprio_class = strtol(optarg, NULL, 10);
			set = 1;
			break;
		case 'p':
			pid = strtol(optarg, NULL, 10);
			break;
		}
	}

	switch (ioprio_class) {
	case IOPRIO_CLASS_NONE:
		ioprio_class = IOPRIO_CLASS_BE;
		break;
	case IOPRIO_CLASS_RT:
	case IOPRIO_CLASS_BE:
		break;
	case IOPRIO_CLASS_IDLE:
		ioprio = 7;
		break;
	default:
		printf("bad prio class %d\n", ioprio_class);
		return 1;
	}

	if (!set) {
		if (!pid && argv[optind])
			pid = strtol(argv[optind], NULL, 10);

		ioprio = ioprio_get(IOPRIO_WHO_PROCESS, pid);

		printf("pid=%d, %d\n", pid, ioprio);

		if (ioprio == -1)
			perror("ioprio_get");
		else {
			ioprio_class = ioprio >> IOPRIO_CLASS_SHIFT;
			ioprio = ioprio & 0xff;
			printf("%s: prio %d\n", to_prio[ioprio_class], ioprio);
		}
	} else {
		if (ioprio_set(IOPRIO_WHO_PROCESS, pid, ioprio | ioprio_class << IOPRIO_CLASS_SHIFT) == -1) {
			perror("ioprio_set");
			return 1;
		}

		if (argv[optind])
			execvp(argv[optind], &argv[optind]);
	}

	return 0;
}
---> snip ionice.c tool <---
March 11 2005, Jens Axboe <axboe@suse.de>
@@ -17,6 +17,7 @@ This driver is known to work with the following cards:
 	* SA P600
 	* SA P800
 	* SA E400
+	* SA E300
 
 If nodes are not already created in the /dev/cciss directory, run as root:
......
@@ -1119,7 +1119,7 @@ running once the system is up.
 			See Documentation/ramdisk.txt.
 
 	psmouse.proto=	[HW,MOUSE] Highest PS2 mouse protocol extension to
-			probe for (bare|imps|exps).
+			probe for (bare|imps|exps|lifebook|any).
 	psmouse.rate=	[HW,MOUSE] Set desired mouse report rate, in reports
 			per second.
 	psmouse.resetafter=
......
Matching of PCMCIA devices to drivers is done using one or more of the
following criteria:
- manufacturer ID
- card ID
- product ID strings _and_ hashes of these strings
- function ID
- device function (actual and pseudo)
You should use the helpers in include/pcmcia/device_id.h for generating the
struct pcmcia_device_id[] entries which match devices to drivers.
If you want to match product ID strings, you also need to pass the crc32
hashes of those strings to the macro; e.g. to match product ID string 1,
you need to use
PCMCIA_DEVICE_PROD_ID1("some_string", 0x(hash_of_some_string)),
If the hash is incorrect, the kernel will inform you about this in "dmesg"
upon module initialization, and tell you of the correct hash.
You can determine the hash of the product ID strings by running
"pcmcia-modalias %n.%m" [%n being replaced with the socket number and %m being
replaced with the device function] from pcmciautils. It generates a string
in the following form:
pcmcia:m0149cC1ABf06pfn00fn00pa725B842DpbF1EFEE84pc0877B627pd00000000
The hex value after "pa" is the hash of product ID string 1, after "pb" for
string 2 and so on.
Alternatively, you can use the small tool below to determine the crc32 hash.
Simply pass the string you want to evaluate as an argument to the program,
e.g.:
$ ./crc32hash "Dual Speed"
-------------------------------------------------------------------------
/* crc32hash.c - derived from linux/lib/crc32.c, GNU GPL v2 */

#include <string.h>
#include <stdio.h>
#include <ctype.h>
#include <stdlib.h>

unsigned int crc32(unsigned char const *p, unsigned int len)
{
	int i;
	unsigned int crc = 0;
	while (len--) {
		crc ^= *p++;
		for (i = 0; i < 8; i++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0xedb88320 : 0);
	}
	return crc;
}

int main(int argc, char **argv)
{
	unsigned int result;
	if (argc != 2) {
		printf("no string passed as argument\n");
		return -1;
	}
	result = crc32((unsigned char const *)argv[1], strlen(argv[1]));
	printf("0x%x\n", result);
	return 0;
}
This file details changes in 2.6 which affect PCMCIA card driver authors:
* in-kernel device<->driver matching
PCMCIA devices and their correct drivers can now be matched in
kernelspace. See 'devicetable.txt' for details.
* Device model integration (as of 2.6.11)
A struct pcmcia_device is registered with the device model core,
and can be used (e.g. for SET_NETDEV_DEV) by using
handle_to_dev(client_handle_t * handle).
* Convert internal I/O port addresses to unsigned long (as of 2.6.11)
ioaddr_t should be replaced by kio_addr_t in PCMCIA card drivers.
* irq_mask and irq_list parameters (as of 2.6.11)
The irq_mask and irq_list parameters should no longer be used in
PCMCIA card drivers. Instead, it is the job of the PCMCIA core to
determine which IRQ should be used. Therefore, link->irq.IRQInfo2
is ignored.
* client->PendingEvents is gone (as of 2.6.11)
client->PendingEvents is no longer available.
* client->Attributes are gone (as of 2.6.11)
client->Attributes is unused, therefore it is removed from all
PCMCIA card drivers
* core functions no longer available (as of 2.6.11)
The following functions have been removed from the kernel source
because they are unused by all in-kernel drivers, and no external
driver was reported to rely on them:
pcmcia_get_first_region()
pcmcia_get_next_region()
pcmcia_modify_window()
pcmcia_set_event_mask()
pcmcia_get_first_window()
pcmcia_get_next_window()
* device list iteration upon module removal (as of 2.6.10)
It is no longer necessary to iterate on the driver's internal
client list and call the ->detach() function upon module removal.
* Resource management. (as of 2.6.8)
Although the PCMCIA subsystem will allocate resources for cards,
it no longer marks these resources busy. This means that driver
authors are now responsible for claiming their resources, as is
done by other drivers in Linux. You should use request_region() to
mark your IO regions in-use, and request_mem_region() to mark your
memory regions in-use. The name argument should be a pointer to
your driver name. E.g., for pcnet_cs, name should point to the
string "pcnet_cs".
@@ -1149,7 +1149,7 @@ S:	Maintained
 
 INFINIBAND SUBSYSTEM
 P:	Roland Dreier
-M:	roland@topspin.com
+M:	rolandd@cisco.com
 P:	Sean Hefty
 M:	mshefty@ichips.intel.com
 P:	Hal Rosenstock
......
@@ -32,6 +32,7 @@
 #include <asm/leds.h>
 #include <asm/processor.h>
 #include <asm/uaccess.h>
+#include <asm/mach/time.h>
 
 extern const char *processor_modes[];
 extern void setup_mm_for_reboot(char mode);
@@ -85,8 +86,10 @@ EXPORT_SYMBOL(pm_power_off);
 void default_idle(void)
 {
 	local_irq_disable();
-	if (!need_resched() && !hlt_counter)
+	if (!need_resched() && !hlt_counter) {
+		timer_dyn_reprogram();
 		arch_idle();
+	}
 	local_irq_enable();
 }
......
@@ -424,15 +424,19 @@ static int timer_dyn_tick_disable(void)
 	return ret;
 }
 
+/*
+ * Reprogram the system timer for at least the calculated time interval.
+ * This function should be called from the idle thread with IRQs disabled,
+ * immediately before sleeping.
+ */
 void timer_dyn_reprogram(void)
 {
 	struct dyn_tick_timer *dyn_tick = system_timer->dyn_tick;
-	unsigned long flags;
 
-	write_seqlock_irqsave(&xtime_lock, flags);
+	write_seqlock(&xtime_lock);
 	if (dyn_tick->state & DYN_TICK_ENABLED)
 		dyn_tick->reprogram(next_timer_interrupt() - jiffies);
-	write_sequnlock_irqrestore(&xtime_lock, flags);
+	write_sequnlock(&xtime_lock);
 }
 
 static ssize_t timer_show_dyn_tick(struct sys_device *dev, char *buf)
......
@@ -288,8 +288,8 @@ static void usb_release(struct device *dev)
 static struct resource udc_resources[] = {
 	/* order is significant! */
 	{		/* registers */
-		.start		= IO_ADDRESS(UDC_BASE),
-		.end		= IO_ADDRESS(UDC_BASE + 0xff),
+		.start		= UDC_BASE,
+		.end		= UDC_BASE + 0xff,
 		.flags		= IORESOURCE_MEM,
 	}, {		/* general IRQ */
 		.start		= IH2_BASE + 20,
@@ -355,8 +355,8 @@ static struct platform_device ohci_device = {
 static struct resource otg_resources[] = {
 	/* order is significant! */
 	{
-		.start		= IO_ADDRESS(OTG_BASE),
-		.end		= IO_ADDRESS(OTG_BASE + 0xff),
+		.start		= OTG_BASE,
+		.end		= OTG_BASE + 0xff,
 		.flags		= IORESOURCE_MEM,
 	}, {
 		.start		= IH2_BASE + 8,
......
@@ -522,6 +522,69 @@ static inline void free_area(unsigned long addr, unsigned long end, char *s)
 		printk(KERN_INFO "Freeing %s memory: %dK\n", s, size);
 }
 
+static inline void
+free_memmap(int node, unsigned long start_pfn, unsigned long end_pfn)
+{
+	struct page *start_pg, *end_pg;
+	unsigned long pg, pgend;
+
+	/*
+	 * Convert start_pfn/end_pfn to a struct page pointer.
+	 */
+	start_pg = pfn_to_page(start_pfn);
+	end_pg = pfn_to_page(end_pfn);
+
+	/*
+	 * Convert to physical addresses, and
+	 * round start upwards and end downwards.
+	 */
+	pg = PAGE_ALIGN(__pa(start_pg));
+	pgend = __pa(end_pg) & PAGE_MASK;
+
+	/*
+	 * If there are free pages between these,
+	 * free the section of the memmap array.
+	 */
+	if (pg < pgend)
+		free_bootmem_node(NODE_DATA(node), pg, pgend - pg);
+}
+
+/*
+ * The mem_map array can get very big.  Free the unused area of the memory map.
+ */
+static void __init free_unused_memmap_node(int node, struct meminfo *mi)
+{
+	unsigned long bank_start, prev_bank_end = 0;
+	unsigned int i;
+
+	/*
+	 * [FIXME] This relies on each bank being in address order.  This
+	 * may not be the case, especially if the user has provided the
+	 * information on the command line.
+	 */
+	for (i = 0; i < mi->nr_banks; i++) {
+		if (mi->bank[i].size == 0 || mi->bank[i].node != node)
+			continue;
+
+		bank_start = mi->bank[i].start >> PAGE_SHIFT;
+		if (bank_start < prev_bank_end) {
+			printk(KERN_ERR "MEM: unordered memory banks.  "
+				"Not freeing memmap.\n");
+			break;
+		}
+
+		/*
+		 * If we had a previous bank, and there is a space
+		 * between the current bank and the previous, free it.
+		 */
+		if (prev_bank_end && prev_bank_end != bank_start)
+			free_memmap(node, prev_bank_end, bank_start);
+
+		prev_bank_end = (mi->bank[i].start +
+				 mi->bank[i].size) >> PAGE_SHIFT;
+	}
+}
+
 /*
  * mem_init() marks the free areas in the mem_map and tells us how much
  * memory is free.  This is done after various parts of the system have
@@ -540,16 +603,12 @@ void __init mem_init(void)
 	max_mapnr   = virt_to_page(high_memory) - mem_map;
 #endif
 
-	/*
-	 * We may have non-contiguous memory.
-	 */
-	if (meminfo.nr_banks != 1)
-		create_memmap_holes(&meminfo);
-
 	/* this will put all unused low memory onto the freelists */
 	for_each_online_node(node) {
 		pg_data_t *pgdat = NODE_DATA(node);
 
+		free_unused_memmap_node(node, &meminfo);
+
 		if (pgdat->node_spanned_pages != 0)
 			totalram_pages += free_all_bootmem_node(pgdat);
 	}
......
@@ -169,7 +169,14 @@ pgd_t *get_pgd_slow(struct mm_struct *mm)
 
 	memzero(new_pgd, FIRST_KERNEL_PGD_NR * sizeof(pgd_t));
 
+	/*
+	 * Copy over the kernel and IO PGD entries
+	 */
 	init_pgd = pgd_offset_k(0);
+	memcpy(new_pgd + FIRST_KERNEL_PGD_NR, init_pgd + FIRST_KERNEL_PGD_NR,
+		       (PTRS_PER_PGD - FIRST_KERNEL_PGD_NR) * sizeof(pgd_t));
+
+	clean_dcache_area(new_pgd, PTRS_PER_PGD * sizeof(pgd_t));
 
 	if (!vectors_high()) {
 		/*
@@ -198,14 +205,6 @@ pgd_t *get_pgd_slow(struct mm_struct *mm)
 		spin_unlock(&mm->page_table_lock);
 	}
 
-	/*
-	 * Copy over the kernel and IO PGD entries
-	 */
-	memcpy(new_pgd + FIRST_KERNEL_PGD_NR, init_pgd + FIRST_KERNEL_PGD_NR,
-		       (PTRS_PER_PGD - FIRST_KERNEL_PGD_NR) * sizeof(pgd_t));
-
-	clean_dcache_area(new_pgd, PTRS_PER_PGD * sizeof(pgd_t));
-
 	return new_pgd;
 
 no_pte:
@@ -698,75 +697,3 @@ void __init iotable_init(struct map_desc *io_desc, int nr)
 	for (i = 0; i < nr; i++)
 		create_mapping(io_desc + i);
 }
-
-static inline void
-free_memmap(int node, unsigned long start_pfn, unsigned long end_pfn)
-{
-	struct page *start_pg, *end_pg;
-	unsigned long pg, pgend;
-
-	/*
-	 * Convert start_pfn/end_pfn to a struct page pointer.
-	 */
-	start_pg = pfn_to_page(start_pfn);
-	end_pg = pfn_to_page(end_pfn);
-
-	/*
-	 * Convert to physical addresses, and
-	 * round start upwards and end downwards.
-	 */
-	pg = PAGE_ALIGN(__pa(start_pg));
-	pgend = __pa(end_pg) & PAGE_MASK;
-
-	/*
-	 * If there are free pages between these,
-	 * free the section of the memmap array.
-	 */
-	if (pg < pgend)
-		free_bootmem_node(NODE_DATA(node), pg, pgend - pg);
-}
-
-static inline void free_unused_memmap_node(int node, struct meminfo *mi)
-{
-	unsigned long bank_start, prev_bank_end = 0;
-	unsigned int i;
-
-	/*
-	 * [FIXME] This relies on each bank being in address order.  This
-	 * may not be the case, especially if the user has provided the
-	 * information on the command line.
-	 */
-	for (i = 0; i < mi->nr_banks; i++) {
-		if (mi->bank[i].size == 0 || mi->bank[i].node != node)
-			continue;
-
-		bank_start = mi->bank[i].start >> PAGE_SHIFT;
-		if (bank_start < prev_bank_end) {
-			printk(KERN_ERR "MEM: unordered memory banks.  "
-				"Not freeing memmap.\n");
-			break;
-		}
-
-		/*
-		 * If we had a previous bank, and there is a space
-		 * between the current bank and the previous, free it.
-		 */
-		if (prev_bank_end && prev_bank_end != bank_start)
-			free_memmap(node, prev_bank_end, bank_start);
-
-		prev_bank_end = PAGE_ALIGN(mi->bank[i].start +
-					   mi->bank[i].size) >> PAGE_SHIFT;
-	}
-}
-
-/*
- * The mem_map array can get very big.  Free
- * the unused area of the memory map.
- */
-void __init create_memmap_holes(struct meminfo *mi)
-{
-	int node;
-
-	for_each_online_node(node)
-		free_unused_memmap_node(node, mi);
-}
@@ -6,7 +6,7 @@
 # To add an entry into this database, please see Documentation/arm/README,
 # or contact rmk@arm.linux.org.uk
 #
-# Last update: Thu Mar 24 14:34:50 2005
+# Last update: Thu Jun 23 20:19:33 2005
 #
 # machine_is_xxx	CONFIG_xxxx		MACH_TYPE_xxx		number
 #
@@ -243,7 +243,7 @@ yoho			ARCH_YOHO		YOHO			231
 jasper			ARCH_JASPER		JASPER			232
 dsc25			ARCH_DSC25		DSC25			233
 omap_innovator		MACH_OMAP_INNOVATOR	OMAP_INNOVATOR		234
-ramses			ARCH_RAMSES		RAMSES			235
+mnci			ARCH_RAMSES		RAMSES			235
 s28x			ARCH_S28X		S28X			236
 mport3			ARCH_MPORT3		MPORT3			237
 pxa_eagle250		ARCH_PXA_EAGLE250	PXA_EAGLE250		238
@@ -323,7 +323,7 @@ nimbra29x		ARCH_NIMBRA29X		NIMBRA29X		311
 nimbra210		ARCH_NIMBRA210		NIMBRA210		312
 hhp_d95xx		ARCH_HHP_D95XX		HHP_D95XX		313
 labarm			ARCH_LABARM		LABARM			314
-m825xx			ARCH_M825XX		M825XX			315
+comcerto		ARCH_M825XX		M825XX			315
 m7100			SA1100_M7100		M7100			316
 nipc2			ARCH_NIPC2		NIPC2			317
 fu7202			ARCH_FU7202		FU7202			318
@@ -724,3 +724,66 @@ lpc22xx			MACH_LPC22XX		LPC22XX			715
 omap_comet3		MACH_COMET3		COMET3			716
 omap_comet4		MACH_COMET4		COMET4			717
 csb625			MACH_CSB625		CSB625			718
+fortunet2		MACH_FORTUNET2		FORTUNET2		719
+s5h2200			MACH_S5H2200		S5H2200			720
+optorm920		MACH_OPTORM920		OPTORM920		721
+adsbitsyxb		MACH_ADSBITSYXB		ADSBITSYXB		722
+adssphere		MACH_ADSSPHERE		ADSSPHERE		723
+adsportal		MACH_ADSPORTAL		ADSPORTAL		724
+ln2410sbc		MACH_LN2410SBC		LN2410SBC		725
+cb3rufc			MACH_CB3RUFC		CB3RUFC			726
+mp2usb			MACH_MP2USB		MP2USB			727
+ntnp425c		MACH_NTNP425C		NTNP425C		728
+colibri			MACH_COLIBRI		COLIBRI			729
+pcm7220			MACH_PCM7220		PCM7220			730
+gateway7001		MACH_GATEWAY7001	GATEWAY7001		731
+pcm027			MACH_PCM027		PCM027			732
+cmpxa			MACH_CMPXA		CMPXA			733
+anubis			MACH_ANUBIS		ANUBIS			734
+ite8152			MACH_ITE8152		ITE8152			735
+lpc3xxx			MACH_LPC3XXX		LPC3XXX			736
+puppeteer		MACH_PUPPETEER		PUPPETEER		737
+vt001			MACH_MACH_VADATECH	MACH_VADATECH		738
+e570			MACH_E570		E570			739
+x50			MACH_X50		X50			740
+recon			MACH_RECON		RECON			741
+xboardgp8		MACH_XBOARDGP8		XBOARDGP8		742
+fpic2			MACH_FPIC2		FPIC2			743
+akita			MACH_AKITA		AKITA			744
+a81			MACH_A81		A81			745
+svm_sc25x		MACH_SVM_SC25X		SVM_SC25X		746
+vt020			MACH_VADATECH020	VADATECH020		747
+tli			MACH_TLI		TLI			748
+edb9315lc		MACH_EDB9315LC		EDB9315LC		749
+passec			MACH_PASSEC		PASSEC			750
+ds_tiger		MACH_DS_TIGER		DS_TIGER		751
+e310			MACH_E310		E310			752
+e330			MACH_E330		E330			753
+rt3000			MACH_RT3000		RT3000			754
+nokia770		MACH_NOKIA770		NOKIA770		755
+pnx0106			MACH_PNX0106		PNX0106			756
+hx21xx			MACH_HX21XX		HX21XX			757
+faraday			MACH_FARADAY		FARADAY			758
+sbc9312			MACH_SBC9312		SBC9312			759
+batman			MACH_BATMAN		BATMAN			760
+jpd201			MACH_JPD201		JPD201			761
+mipsa			MACH_MIPSA		MIPSA			762
+kacom			MACH_KACOM		KACOM			763
+swarcocpu		MACH_SWARCOCPU		SWARCOCPU		764
+swarcodsl		MACH_SWARCODSL		SWARCODSL		765
+blueangel		MACH_BLUEANGEL		BLUEANGEL		766
+hairygrama		MACH_HAIRYGRAMA		HAIRYGRAMA		767
+banff			MACH_BANFF		BANFF			768
+carmeva			MACH_CARMEVA		CARMEVA			769
+sam255			MACH_SAM255		SAM255			770
+ppm10			MACH_PPM10		PPM10			771
+edb9315a		MACH_EDB9315A		EDB9315A		772
+sunset			MACH_SUNSET		SUNSET			773
+stargate2		MACH_STARGATE2		STARGATE2		774
+intelmote2		MACH_INTELMOTE2		INTELMOTE2		775
+trizeps4		MACH_TRIZEPS4		TRIZEPS4		776
+mainstone2		MACH_MAINSTONE2		MAINSTONE2		777
+ez_ixp42x		MACH_EZ_IXP42X		EZ_IXP42X		778
+tapwave_zodiac		MACH_TAPWAVE_ZODIAC	TAPWAVE_ZODIAC		779
+universalmeter		MACH_UNIVERSALMETER	UNIVERSALMETER		780
+hicoarm9		MACH_HICOARM9		HICOARM9		781
@@ -127,48 +127,23 @@ static inline void prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
 	regs->eip = (unsigned long)&p->ainsn.insn;
 }
 
-struct task_struct *arch_get_kprobe_task(void *ptr)
-{
-	return ((struct thread_info *) (((unsigned long) ptr) &
-					(~(THREAD_SIZE -1))))->task;
-}
-
 void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs)
 {
 	unsigned long *sara = (unsigned long *)&regs->esp;
 	struct kretprobe_instance *ri;
-	static void *orig_ret_addr;
 
-	/*
-	 * Save the return address when the return probe hits
-	 * the first time, and use it to populate the (krprobe
-	 * instance)->ret_addr for subsequent return probes at
-	 * the same addrress since stack address would have
-	 * the kretprobe_trampoline by then.
-	 */
-	if (((void*) *sara) != kretprobe_trampoline)
-		orig_ret_addr = (void*) *sara;
-
-	if ((ri = get_free_rp_inst(rp)) != NULL) {
-		ri->rp = rp;
-		ri->stack_addr = sara;
-		ri->ret_addr = orig_ret_addr;
-		add_rp_inst(ri);
+	if ((ri = get_free_rp_inst(rp)) != NULL) {
+		ri->rp = rp;
+		ri->task = current;
+		ri->ret_addr = (kprobe_opcode_t *) *sara;
 
 		/* Replace the return addr with trampoline addr */
 		*sara = (unsigned long) &kretprobe_trampoline;
-	} else {
-		rp->nmissed++;
-	}
-}
 
-void arch_kprobe_flush_task(struct task_struct *tk)
-{
-	struct kretprobe_instance *ri;
-	while ((ri = get_rp_inst_tsk(tk)) != NULL) {
-		*((unsigned long *)(ri->stack_addr)) =
-					(unsigned long) ri->ret_addr;
-		recycle_rp_inst(ri);
-	}
+		add_rp_inst(ri);
+	} else {
+		rp->nmissed++;
+	}
 }
 
 /*
@@ -286,36 +261,59 @@ static int kprobe_handler(struct pt_regs *regs)
  */
 int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
 {
-	struct task_struct *tsk;
-	struct kretprobe_instance *ri;
-	struct hlist_head *head;
-	struct hlist_node *node;
-	unsigned long *sara = ((unsigned long *) &regs->esp) - 1;
-
-	tsk = arch_get_kprobe_task(sara);
-	head = kretprobe_inst_table_head(tsk);
-
-	hlist_for_each_entry(ri, node, head, hlist) {
-		if (ri->stack_addr == sara && ri->rp) {
-			if (ri->rp->handler)
-				ri->rp->handler(ri, regs);
-		}
-	}
-	return 0;
-}
-
-void trampoline_post_handler(struct kprobe *p, struct pt_regs *regs,
-						unsigned long flags)
-{
-	struct kretprobe_instance *ri;
-	/* RA already popped */
-	unsigned long *sara = ((unsigned long *)&regs->esp) - 1;
-
-	while ((ri = get_rp_inst(sara))) {
-		regs->eip = (unsigned long)ri->ret_addr;
-		recycle_rp_inst(ri);
-	}
-	regs->eflags &= ~TF_MASK;
+	struct kretprobe_instance *ri = NULL;
+	struct hlist_head *head;
+	struct hlist_node *node, *tmp;
+	unsigned long orig_ret_address = 0;
+	unsigned long trampoline_address =(unsigned long)&kretprobe_trampoline;
+
+	head = kretprobe_inst_table_head(current);
+
+	/*
+	 * It is possible to have multiple instances associated with a given
+	 * task either because an multiple functions in the call path
+	 * have a return probe installed on them, and/or more then one return
+	 * return probe was registered for a target function.
+	 *
+	 * We can handle this because:
+	 *     - instances are always inserted at the head of the list
+	 *     - when multiple return probes are registered for the same
+	 *       function, the first instance's ret_addr will point to the
+	 *       real return address, and all the rest will point to
+	 *       kretprobe_trampoline
+	 */
+	hlist_for_each_entry_safe(ri, node, tmp, head, hlist) {
+		if (ri->task != current)
+			/* another task is sharing our hash bucket */
+			continue;
+
+		if (ri->rp && ri->rp->handler)
+			ri->rp->handler(ri, regs);
+
+		orig_ret_address = (unsigned long)ri->ret_addr;
+		recycle_rp_inst(ri);
+
+		if (orig_ret_address != trampoline_address)
+			/*
+			 * This is the real return address. Any other
+			 * instances associated with this task are for
+			 * other calls deeper on the call stack
+			 */
+			break;
+	}
+
+	BUG_ON(!orig_ret_address || (orig_ret_address == trampoline_address));
+	regs->eip = orig_ret_address;
+
+	unlock_kprobes();
+	preempt_enable_no_resched();
+
+	/*
+	 * By returning a non-zero value, we are telling
+	 * kprobe_handler() that we have handled unlocking
+	 * and re-enabling preemption.
+	 */
+	return 1;
 }
 
 /*
@@ -403,8 +401,7 @@ static inline int post_kprobe_handler(struct pt_regs *regs)
 		current_kprobe->post_handler(current_kprobe, regs, 0);
 	}
 
-	if (current_kprobe->post_handler != trampoline_post_handler)
-		resume_execution(current_kprobe, regs);
+	resume_execution(current_kprobe, regs);
 	regs->eflags |= kprobe_saved_eflags;
 
 	/*Restore back the original saved kprobes variables and continue. */
@@ -534,3 +531,13 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
 	}
 	return 0;
 }
+
+static struct kprobe trampoline_p = {
+	.addr = (kprobe_opcode_t *) &kretprobe_trampoline,
+	.pre_handler = trampoline_probe_handler
+};
+
+int __init arch_init(void)
+{
+	return register_kprobe(&trampoline_p);
+}
@@ -616,6 +616,33 @@ handle_io_bitmap(struct thread_struct *next, struct tss_struct *tss)
 	tss->io_bitmap_base = INVALID_IO_BITMAP_OFFSET_LAZY;
 }
 
+/*
+ * This function selects if the context switch from prev to next
+ * has to tweak the TSC disable bit in the cr4.
+ */
+static inline void disable_tsc(struct task_struct *prev_p,
+			       struct task_struct *next_p)
+{
+	struct thread_info *prev, *next;
+
+	/*
+	 * gcc should eliminate the ->thread_info dereference if
+	 * has_secure_computing returns 0 at compile time (SECCOMP=n).
+	 */
+	prev = prev_p->thread_info;
+	next = next_p->thread_info;
+
+	if (has_secure_computing(prev) || has_secure_computing(next)) {
+		/* slow path here */
+		if (has_secure_computing(prev) &&
+		    !has_secure_computing(next)) {
+			write_cr4(read_cr4() & ~X86_CR4_TSD);
+		} else if (!has_secure_computing(prev) &&
+			   has_secure_computing(next))
+			write_cr4(read_cr4() | X86_CR4_TSD);
+	}
+}
+
 /*
  * switch_to(x,yn) should switch tasks from x to y.
  *
@@ -695,6 +722,8 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
 	if (unlikely(prev->io_bitmap_ptr || next->io_bitmap_ptr))
 		handle_io_bitmap(next, tss);
 
+	disable_tsc(prev_p, next_p);
+
 	return prev_p;
 }
......
...@@ -289,3 +289,5 @@ ENTRY(sys_call_table) ...@@ -289,3 +289,5 @@ ENTRY(sys_call_table)
.long sys_add_key .long sys_add_key
.long sys_request_key .long sys_request_key
.long sys_keyctl .long sys_keyctl
.long sys_ioprio_set
.long sys_ioprio_get /* 290 */
...@@ -1577,8 +1577,8 @@ sys_call_table: ...@@ -1577,8 +1577,8 @@ sys_call_table:
data8 sys_add_key data8 sys_add_key
data8 sys_request_key data8 sys_request_key
data8 sys_keyctl data8 sys_keyctl
data8 sys_ni_syscall data8 sys_ioprio_set
data8 sys_ni_syscall // 1275 data8 sys_ioprio_get // 1275
data8 sys_set_zone_reclaim data8 sys_set_zone_reclaim
data8 sys_ni_syscall data8 sys_ni_syscall
data8 sys_ni_syscall data8 sys_ni_syscall
......
...@@ -34,6 +34,7 @@ ...@@ -34,6 +34,7 @@
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/kdebug.h> #include <asm/kdebug.h>
#include <asm/sections.h>
extern void jprobe_inst_return(void); extern void jprobe_inst_return(void);
...@@ -263,13 +264,33 @@ static inline void get_kprobe_inst(bundle_t *bundle, uint slot, ...@@ -263,13 +264,33 @@ static inline void get_kprobe_inst(bundle_t *bundle, uint slot,
} }
} }
/* Returns non-zero if the addr is in the Interrupt Vector Table */
static inline int in_ivt_functions(unsigned long addr)
{
return (addr >= (unsigned long)__start_ivt_text
&& addr < (unsigned long)__end_ivt_text);
}
static int valid_kprobe_addr(int template, int slot, unsigned long addr) static int valid_kprobe_addr(int template, int slot, unsigned long addr)
{ {
if ((slot > 2) || ((bundle_encoding[template][1] == L) && slot > 1)) { if ((slot > 2) || ((bundle_encoding[template][1] == L) && slot > 1)) {
printk(KERN_WARNING "Attempting to insert unaligned kprobe at 0x%lx\n", printk(KERN_WARNING "Attempting to insert unaligned kprobe "
addr); "at 0x%lx\n", addr);
return -EINVAL; return -EINVAL;
} }
if (in_ivt_functions(addr)) {
printk(KERN_WARNING "Kprobes can't be inserted inside "
"IVT functions at 0x%lx\n", addr);
return -EINVAL;
}
if (slot == 1 && bundle_encoding[template][1] != L) {
printk(KERN_WARNING "Inserting kprobes on slot #1 "
"is not supported\n");
return -EINVAL;
}
return 0; return 0;
} }
...@@ -290,6 +311,94 @@ static inline void set_current_kprobe(struct kprobe *p) ...@@ -290,6 +311,94 @@ static inline void set_current_kprobe(struct kprobe *p)
current_kprobe = p; current_kprobe = p;
} }
static void kretprobe_trampoline(void)
{
}
/*
* At this point the target function has been tricked into
* returning into our trampoline. Look up the associated instance
* and then:
* - call the handler function
* - cleanup by marking the instance as unused
* - long jump back to the original return address
*/
int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
{
struct kretprobe_instance *ri = NULL;
struct hlist_head *head;
struct hlist_node *node, *tmp;
unsigned long orig_ret_address = 0;
unsigned long trampoline_address =
((struct fnptr *)kretprobe_trampoline)->ip;
head = kretprobe_inst_table_head(current);
/*
* It is possible to have multiple instances associated with a given
* task, either because multiple functions in the call path
* have a return probe installed on them, and/or more than one
* return probe was registered for a target function.
*
* We can handle this because:
* - instances are always inserted at the head of the list
* - when multiple return probes are registered for the same
* function, the first instance's ret_addr will point to the
* real return address, and all the rest will point to
* kretprobe_trampoline
*/
hlist_for_each_entry_safe(ri, node, tmp, head, hlist) {
if (ri->task != current)
/* another task is sharing our hash bucket */
continue;
if (ri->rp && ri->rp->handler)
ri->rp->handler(ri, regs);
orig_ret_address = (unsigned long)ri->ret_addr;
recycle_rp_inst(ri);
if (orig_ret_address != trampoline_address)
/*
* This is the real return address. Any other
* instances associated with this task are for
* other calls deeper on the call stack
*/
break;
}
BUG_ON(!orig_ret_address || (orig_ret_address == trampoline_address));
regs->cr_iip = orig_ret_address;
unlock_kprobes();
preempt_enable_no_resched();
/*
* By returning a non-zero value, we are telling
* kprobe_handler() that we have handled unlocking
* and re-enabling preemption.
*/
return 1;
}
void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs)
{
struct kretprobe_instance *ri;
if ((ri = get_free_rp_inst(rp)) != NULL) {
ri->rp = rp;
ri->task = current;
ri->ret_addr = (kprobe_opcode_t *)regs->b0;
/* Replace the return addr with trampoline addr */
regs->b0 = ((struct fnptr *)kretprobe_trampoline)->ip;
add_rp_inst(ri);
} else {
rp->nmissed++;
}
}
int arch_prepare_kprobe(struct kprobe *p) int arch_prepare_kprobe(struct kprobe *p)
{ {
unsigned long addr = (unsigned long) p->addr; unsigned long addr = (unsigned long) p->addr;
...@@ -492,8 +601,8 @@ static int pre_kprobes_handler(struct die_args *args) ...@@ -492,8 +601,8 @@ static int pre_kprobes_handler(struct die_args *args)
if (p->pre_handler && p->pre_handler(p, regs)) if (p->pre_handler && p->pre_handler(p, regs))
/* /*
* Our pre-handler is specifically requesting that we just * Our pre-handler is specifically requesting that we just
* do a return. This is handling the case where the * do a return. This is used for both the jprobe pre-handler
* pre-handler is really our special jprobe pre-handler. * and the kretprobe trampoline
*/ */
return 1; return 1;
...@@ -599,3 +708,14 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) ...@@ -599,3 +708,14 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
*regs = jprobe_saved_regs; *regs = jprobe_saved_regs;
return 1; return 1;
} }
static struct kprobe trampoline_p = {
.pre_handler = trampoline_probe_handler
};
int __init arch_init(void)
{
trampoline_p.addr =
(kprobe_opcode_t *)((struct fnptr *)kretprobe_trampoline)->ip;
return register_kprobe(&trampoline_p);
}
...@@ -27,6 +27,7 @@ ...@@ -27,6 +27,7 @@
#include <linux/efi.h> #include <linux/efi.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/kprobes.h>
#include <asm/cpu.h> #include <asm/cpu.h>
#include <asm/delay.h> #include <asm/delay.h>
...@@ -707,6 +708,13 @@ kernel_thread_helper (int (*fn)(void *), void *arg) ...@@ -707,6 +708,13 @@ kernel_thread_helper (int (*fn)(void *), void *arg)
void void
flush_thread (void) flush_thread (void)
{ {
/*
* Remove function-return probe instances associated with this task
* and put them back on the free list. Do not insert an exit probe for
* this function; it will be disabled by kprobe_flush_task if you do.
*/
kprobe_flush_task(current);
/* drop floating-point and debug-register state if it exists: */ /* drop floating-point and debug-register state if it exists: */
current->thread.flags &= ~(IA64_THREAD_FPH_VALID | IA64_THREAD_DBG_VALID); current->thread.flags &= ~(IA64_THREAD_FPH_VALID | IA64_THREAD_DBG_VALID);
ia64_drop_fpu(current); ia64_drop_fpu(current);
...@@ -721,6 +729,14 @@ flush_thread (void) ...@@ -721,6 +729,14 @@ flush_thread (void)
void void
exit_thread (void) exit_thread (void)
{ {
/*
* Remove function-return probe instances associated with this task
* and put them back on the free list. Do not insert an exit probe for
* this function; it will be disabled by kprobe_flush_task if you do.
*/
kprobe_flush_task(current);
ia64_drop_fpu(current); ia64_drop_fpu(current);
#ifdef CONFIG_PERFMON #ifdef CONFIG_PERFMON
/* if needed, stop monitoring and flush state to perfmon context */ /* if needed, stop monitoring and flush state to perfmon context */
......
...@@ -8,6 +8,11 @@ ...@@ -8,6 +8,11 @@
#define LOAD_OFFSET (KERNEL_START - KERNEL_TR_PAGE_SIZE) #define LOAD_OFFSET (KERNEL_START - KERNEL_TR_PAGE_SIZE)
#include <asm-generic/vmlinux.lds.h> #include <asm-generic/vmlinux.lds.h>
#define IVT_TEXT \
VMLINUX_SYMBOL(__start_ivt_text) = .; \
*(.text.ivt) \
VMLINUX_SYMBOL(__end_ivt_text) = .;
OUTPUT_FORMAT("elf64-ia64-little") OUTPUT_FORMAT("elf64-ia64-little")
OUTPUT_ARCH(ia64) OUTPUT_ARCH(ia64)
ENTRY(phys_start) ENTRY(phys_start)
...@@ -39,7 +44,7 @@ SECTIONS ...@@ -39,7 +44,7 @@ SECTIONS
.text : AT(ADDR(.text) - LOAD_OFFSET) .text : AT(ADDR(.text) - LOAD_OFFSET)
{ {
*(.text.ivt) IVT_TEXT
*(.text) *(.text)
SCHED_TEXT SCHED_TEXT
LOCK_TEXT LOCK_TEXT
......
...@@ -457,7 +457,7 @@ static int do_signal(sigset_t *oldset, struct pt_regs *regs) ...@@ -457,7 +457,7 @@ static int do_signal(sigset_t *oldset, struct pt_regs *regs)
if (!user_mode(regs)) if (!user_mode(regs))
return 1; return 1;
if (try_to_freeze(0)) if (try_to_freeze())
goto no_signal; goto no_signal;
if (!oldset) if (!oldset)
......
...@@ -1449,3 +1449,5 @@ _GLOBAL(sys_call_table) ...@@ -1449,3 +1449,5 @@ _GLOBAL(sys_call_table)
.long sys_request_key /* 270 */ .long sys_request_key /* 270 */
.long sys_keyctl .long sys_keyctl
.long sys_waitid .long sys_waitid
.long sys_ioprio_set
.long sys_ioprio_get
...@@ -606,9 +606,19 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, ...@@ -606,9 +606,19 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
struct page *page = pfn_to_page(pfn); struct page *page = pfn_to_page(pfn);
if (!PageReserved(page) if (!PageReserved(page)
&& !test_bit(PG_arch_1, &page->flags)) { && !test_bit(PG_arch_1, &page->flags)) {
if (vma->vm_mm == current->active_mm) if (vma->vm_mm == current->active_mm) {
#ifdef CONFIG_8xx
/* On 8xx, cache control instructions (particularly
* "dcbst" from flush_dcache_icache) fault as write
* operation if there is an unpopulated TLB entry
* for the address in question. To workaround that,
* we invalidate the TLB here, thus avoiding dcbst
* misbehaviour.
*/
_tlbie(address);
#endif
__flush_dcache_icache((void *) address); __flush_dcache_icache((void *) address);
else } else
flush_dcache_icache_page(page); flush_dcache_icache_page(page);
set_bit(PG_arch_1, &page->flags); set_bit(PG_arch_1, &page->flags);
} }
......
...@@ -46,7 +46,7 @@ ...@@ -46,7 +46,7 @@
.section .text .section .text
.align 5 .align 5
#if defined(CONFIG_PMAC_PBOOK) || defined(CONFIG_CPU_FREQ_PMAC) #if defined(CONFIG_PM) || defined(CONFIG_CPU_FREQ_PMAC)
/* This gets called by via-pmu.c late during the sleep process. /* This gets called by via-pmu.c late during the sleep process.
* The PMU was already send the sleep command and will shut us down * The PMU was already send the sleep command and will shut us down
...@@ -382,7 +382,7 @@ turn_on_mmu: ...@@ -382,7 +382,7 @@ turn_on_mmu:
isync isync
rfi rfi
#endif /* defined(CONFIG_PMAC_PBOOK) || defined(CONFIG_CPU_FREQ) */ #endif /* defined(CONFIG_PM) || defined(CONFIG_CPU_FREQ) */
.section .data .section .data
.balign L1_CACHE_LINE_SIZE .balign L1_CACHE_LINE_SIZE
......
...@@ -206,7 +206,7 @@ via_calibrate_decr(void) ...@@ -206,7 +206,7 @@ via_calibrate_decr(void)
return 1; return 1;
} }
#ifdef CONFIG_PMAC_PBOOK #ifdef CONFIG_PM
/* /*
* Reset the time after a sleep. * Reset the time after a sleep.
*/ */
...@@ -238,7 +238,7 @@ time_sleep_notify(struct pmu_sleep_notifier *self, int when) ...@@ -238,7 +238,7 @@ time_sleep_notify(struct pmu_sleep_notifier *self, int when)
static struct pmu_sleep_notifier time_sleep_notifier __pmacdata = { static struct pmu_sleep_notifier time_sleep_notifier __pmacdata = {
time_sleep_notify, SLEEP_LEVEL_MISC, time_sleep_notify, SLEEP_LEVEL_MISC,
}; };
#endif /* CONFIG_PMAC_PBOOK */ #endif /* CONFIG_PM */
/* /*
* Query the OF and get the decr frequency. * Query the OF and get the decr frequency.
...@@ -251,9 +251,9 @@ pmac_calibrate_decr(void) ...@@ -251,9 +251,9 @@ pmac_calibrate_decr(void)
struct device_node *cpu; struct device_node *cpu;
unsigned int freq, *fp; unsigned int freq, *fp;
#ifdef CONFIG_PMAC_PBOOK #ifdef CONFIG_PM
pmu_register_sleep_notifier(&time_sleep_notifier); pmu_register_sleep_notifier(&time_sleep_notifier);
#endif /* CONFIG_PMAC_PBOOK */ #endif /* CONFIG_PM */
/* We assume MacRISC2 machines have correct device-tree /* We assume MacRISC2 machines have correct device-tree
* calibration. That's better since the VIA itself seems * calibration. That's better since the VIA itself seems
......
...@@ -324,6 +324,7 @@ sandpoint_setup_arch(void) ...@@ -324,6 +324,7 @@ sandpoint_setup_arch(void)
pdata[1].irq = 0; pdata[1].irq = 0;
pdata[1].mapbase = 0; pdata[1].mapbase = 0;
} }
}
printk(KERN_INFO "Motorola SPS Sandpoint Test Platform\n"); printk(KERN_INFO "Motorola SPS Sandpoint Test Platform\n");
printk(KERN_INFO "Port by MontaVista Software, Inc. (source@mvista.com)\n"); printk(KERN_INFO "Port by MontaVista Software, Inc. (source@mvista.com)\n");
......
...@@ -370,8 +370,9 @@ void __init openpic_init(int offset) ...@@ -370,8 +370,9 @@ void __init openpic_init(int offset)
/* Initialize IPI interrupts */ /* Initialize IPI interrupts */
if ( ppc_md.progress ) ppc_md.progress("openpic: ipi",0x3bb); if ( ppc_md.progress ) ppc_md.progress("openpic: ipi",0x3bb);
for (i = 0; i < OPENPIC_NUM_IPI; i++) { for (i = 0; i < OPENPIC_NUM_IPI; i++) {
/* Disabled, Priority 10..13 */ /* Disabled, increased priorities 10..13 */
openpic_initipi(i, 10+i, OPENPIC_VEC_IPI+i+offset); openpic_initipi(i, OPENPIC_PRIORITY_IPI_BASE+i,
OPENPIC_VEC_IPI+i+offset);
/* IPIs are per-CPU */ /* IPIs are per-CPU */
irq_desc[OPENPIC_VEC_IPI+i+offset].status |= IRQ_PER_CPU; irq_desc[OPENPIC_VEC_IPI+i+offset].status |= IRQ_PER_CPU;
irq_desc[OPENPIC_VEC_IPI+i+offset].handler = &open_pic_ipi; irq_desc[OPENPIC_VEC_IPI+i+offset].handler = &open_pic_ipi;
...@@ -399,8 +400,9 @@ void __init openpic_init(int offset) ...@@ -399,8 +400,9 @@ void __init openpic_init(int offset)
if (sense & IRQ_SENSE_MASK) if (sense & IRQ_SENSE_MASK)
irq_desc[i+offset].status = IRQ_LEVEL; irq_desc[i+offset].status = IRQ_LEVEL;
/* Enabled, Priority 8 */ /* Enabled, Default priority */
openpic_initirq(i, 8, i+offset, (sense & IRQ_POLARITY_MASK), openpic_initirq(i, OPENPIC_PRIORITY_DEFAULT, i+offset,
(sense & IRQ_POLARITY_MASK),
(sense & IRQ_SENSE_MASK)); (sense & IRQ_SENSE_MASK));
/* Processor 0 */ /* Processor 0 */
openpic_mapirq(i, CPU_MASK_CPU0, CPU_MASK_NONE); openpic_mapirq(i, CPU_MASK_CPU0, CPU_MASK_NONE);
...@@ -655,6 +657,18 @@ static void __init openpic_maptimer(u_int timer, cpumask_t cpumask) ...@@ -655,6 +657,18 @@ static void __init openpic_maptimer(u_int timer, cpumask_t cpumask)
cpus_addr(phys)[0]); cpus_addr(phys)[0]);
} }
/*
* Change the priority of an interrupt
*/
void __init
openpic_set_irq_priority(u_int irq, u_int pri)
{
check_arg_irq(irq);
openpic_safe_writefield(&ISR[irq - open_pic_irq_offset]->Vector_Priority,
OPENPIC_PRIORITY_MASK,
pri << OPENPIC_PRIORITY_SHIFT);
}
/* /*
* Initalize the interrupt source which will generate an NMI. * Initalize the interrupt source which will generate an NMI.
* This raises the interrupt's priority from 8 to 9. * This raises the interrupt's priority from 8 to 9.
...@@ -665,9 +679,7 @@ void __init ...@@ -665,9 +679,7 @@ void __init
openpic_init_nmi_irq(u_int irq) openpic_init_nmi_irq(u_int irq)
{ {
check_arg_irq(irq); check_arg_irq(irq);
openpic_safe_writefield(&ISR[irq - open_pic_irq_offset]->Vector_Priority, openpic_set_irq_priority(irq, OPENPIC_PRIORITY_NMI);
OPENPIC_PRIORITY_MASK,
9 << OPENPIC_PRIORITY_SHIFT);
} }
/* /*
......
...@@ -36,6 +36,8 @@ ...@@ -36,6 +36,8 @@
#include <asm/kdebug.h> #include <asm/kdebug.h>
#include <asm/sstep.h> #include <asm/sstep.h>
static DECLARE_MUTEX(kprobe_mutex);
static struct kprobe *current_kprobe; static struct kprobe *current_kprobe;
static unsigned long kprobe_status, kprobe_saved_msr; static unsigned long kprobe_status, kprobe_saved_msr;
static struct kprobe *kprobe_prev; static struct kprobe *kprobe_prev;
...@@ -54,6 +56,15 @@ int arch_prepare_kprobe(struct kprobe *p) ...@@ -54,6 +56,15 @@ int arch_prepare_kprobe(struct kprobe *p)
printk("Cannot register a kprobe on rfid or mtmsrd\n"); printk("Cannot register a kprobe on rfid or mtmsrd\n");
ret = -EINVAL; ret = -EINVAL;
} }
/* insn must be on a special executable page on ppc64 */
if (!ret) {
up(&kprobe_mutex);
p->ainsn.insn = get_insn_slot();
down(&kprobe_mutex);
if (!p->ainsn.insn)
ret = -ENOMEM;
}
return ret; return ret;
} }
...@@ -79,16 +90,22 @@ void arch_disarm_kprobe(struct kprobe *p) ...@@ -79,16 +90,22 @@ void arch_disarm_kprobe(struct kprobe *p)
void arch_remove_kprobe(struct kprobe *p) void arch_remove_kprobe(struct kprobe *p)
{ {
up(&kprobe_mutex);
free_insn_slot(p->ainsn.insn);
down(&kprobe_mutex);
} }
static inline void prepare_singlestep(struct kprobe *p, struct pt_regs *regs) static inline void prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
{ {
kprobe_opcode_t insn = *p->ainsn.insn;
regs->msr |= MSR_SE; regs->msr |= MSR_SE;
/*single step inline if it a breakpoint instruction*/
if (p->opcode == BREAKPOINT_INSTRUCTION) /* single step inline if it is a trap variant */
if (IS_TW(insn) || IS_TD(insn) || IS_TWI(insn) || IS_TDI(insn))
regs->nip = (unsigned long)p->addr; regs->nip = (unsigned long)p->addr;
else else
regs->nip = (unsigned long)&p->ainsn.insn; regs->nip = (unsigned long)p->ainsn.insn;
} }
static inline void save_previous_kprobe(void) static inline void save_previous_kprobe(void)
...@@ -105,6 +122,23 @@ static inline void restore_previous_kprobe(void) ...@@ -105,6 +122,23 @@ static inline void restore_previous_kprobe(void)
kprobe_saved_msr = kprobe_saved_msr_prev; kprobe_saved_msr = kprobe_saved_msr_prev;
} }
void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs)
{
struct kretprobe_instance *ri;
if ((ri = get_free_rp_inst(rp)) != NULL) {
ri->rp = rp;
ri->task = current;
ri->ret_addr = (kprobe_opcode_t *)regs->link;
/* Replace the return addr with trampoline addr */
regs->link = (unsigned long)kretprobe_trampoline;
add_rp_inst(ri);
} else {
rp->nmissed++;
}
}
static inline int kprobe_handler(struct pt_regs *regs) static inline int kprobe_handler(struct pt_regs *regs)
{ {
struct kprobe *p; struct kprobe *p;
...@@ -194,6 +228,78 @@ static inline int kprobe_handler(struct pt_regs *regs) ...@@ -194,6 +228,78 @@ static inline int kprobe_handler(struct pt_regs *regs)
return ret; return ret;
} }
/*
* Function return probe trampoline:
* - init_kprobes() establishes a probepoint here
* - When the probed function returns, this probe
* causes the handlers to fire
*/
void kretprobe_trampoline_holder(void)
{
asm volatile(".global kretprobe_trampoline\n"
"kretprobe_trampoline:\n"
"nop\n");
}
/*
* Called when the probe at kretprobe trampoline is hit
*/
int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
{
struct kretprobe_instance *ri = NULL;
struct hlist_head *head;
struct hlist_node *node, *tmp;
unsigned long orig_ret_address = 0;
unsigned long trampoline_address =(unsigned long)&kretprobe_trampoline;
head = kretprobe_inst_table_head(current);
/*
* It is possible to have multiple instances associated with a given
* task, either because multiple functions in the call path
* have a return probe installed on them, and/or more than one
* return probe was registered for a target function.
*
* We can handle this because:
* - instances are always inserted at the head of the list
* - when multiple return probes are registered for the same
* function, the first instance's ret_addr will point to the
* real return address, and all the rest will point to
* kretprobe_trampoline
*/
hlist_for_each_entry_safe(ri, node, tmp, head, hlist) {
if (ri->task != current)
/* another task is sharing our hash bucket */
continue;
if (ri->rp && ri->rp->handler)
ri->rp->handler(ri, regs);
orig_ret_address = (unsigned long)ri->ret_addr;
recycle_rp_inst(ri);
if (orig_ret_address != trampoline_address)
/*
* This is the real return address. Any other
* instances associated with this task are for
* other calls deeper on the call stack
*/
break;
}
BUG_ON(!orig_ret_address || (orig_ret_address == trampoline_address));
regs->nip = orig_ret_address;
unlock_kprobes();
/*
* By returning a non-zero value, we are telling
* kprobe_handler() that we have handled unlocking
* and re-enabling preemption.
*/
return 1;
}
/* /*
* Called after single-stepping. p->addr is the address of the * Called after single-stepping. p->addr is the address of the
* instruction whose first byte has been replaced by the "breakpoint" * instruction whose first byte has been replaced by the "breakpoint"
...@@ -205,9 +311,10 @@ static inline int kprobe_handler(struct pt_regs *regs) ...@@ -205,9 +311,10 @@ static inline int kprobe_handler(struct pt_regs *regs)
static void resume_execution(struct kprobe *p, struct pt_regs *regs) static void resume_execution(struct kprobe *p, struct pt_regs *regs)
{ {
int ret; int ret;
unsigned int insn = *p->ainsn.insn;
regs->nip = (unsigned long)p->addr; regs->nip = (unsigned long)p->addr;
ret = emulate_step(regs, p->ainsn.insn[0]); ret = emulate_step(regs, insn);
if (ret == 0) if (ret == 0)
regs->nip = (unsigned long)p->addr + 4; regs->nip = (unsigned long)p->addr + 4;
} }
...@@ -331,3 +438,13 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) ...@@ -331,3 +438,13 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
memcpy(regs, &jprobe_saved_regs, sizeof(struct pt_regs)); memcpy(regs, &jprobe_saved_regs, sizeof(struct pt_regs));
return 1; return 1;
} }
static struct kprobe trampoline_p = {
.addr = (kprobe_opcode_t *) &kretprobe_trampoline,
.pre_handler = trampoline_probe_handler
};
int __init arch_init(void)
{
return register_kprobe(&trampoline_p);
}
...@@ -75,6 +75,7 @@ EXPORT_SYMBOL(giveup_fpu); ...@@ -75,6 +75,7 @@ EXPORT_SYMBOL(giveup_fpu);
EXPORT_SYMBOL(giveup_altivec); EXPORT_SYMBOL(giveup_altivec);
#endif #endif
EXPORT_SYMBOL(__flush_icache_range); EXPORT_SYMBOL(__flush_icache_range);
EXPORT_SYMBOL(flush_dcache_range);
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
#ifdef CONFIG_PPC_ISERIES #ifdef CONFIG_PPC_ISERIES
......
...@@ -36,6 +36,7 @@ ...@@ -36,6 +36,7 @@
#include <linux/kallsyms.h> #include <linux/kallsyms.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/utsname.h> #include <linux/utsname.h>
#include <linux/kprobes.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
...@@ -307,6 +308,8 @@ void show_regs(struct pt_regs * regs) ...@@ -307,6 +308,8 @@ void show_regs(struct pt_regs * regs)
void exit_thread(void) void exit_thread(void)
{ {
kprobe_flush_task(current);
#ifndef CONFIG_SMP #ifndef CONFIG_SMP
if (last_task_used_math == current) if (last_task_used_math == current)
last_task_used_math = NULL; last_task_used_math = NULL;
...@@ -321,6 +324,7 @@ void flush_thread(void) ...@@ -321,6 +324,7 @@ void flush_thread(void)
{ {
struct thread_info *t = current_thread_info(); struct thread_info *t = current_thread_info();
kprobe_flush_task(current);
if (t->flags & _TIF_ABI_PENDING) if (t->flags & _TIF_ABI_PENDING)
t->flags ^= (_TIF_ABI_PENDING | _TIF_32BIT); t->flags ^= (_TIF_ABI_PENDING | _TIF_32BIT);
......
...@@ -91,6 +91,7 @@ unsigned long tb_to_xs; ...@@ -91,6 +91,7 @@ unsigned long tb_to_xs;
unsigned tb_to_us; unsigned tb_to_us;
unsigned long processor_freq; unsigned long processor_freq;
DEFINE_SPINLOCK(rtc_lock); DEFINE_SPINLOCK(rtc_lock);
EXPORT_SYMBOL_GPL(rtc_lock);
unsigned long tb_to_ns_scale; unsigned long tb_to_ns_scale;
unsigned long tb_to_ns_shift; unsigned long tb_to_ns_shift;
......
...@@ -16,7 +16,7 @@ ...@@ -16,7 +16,7 @@
#include <asm/ebus.h> #include <asm/ebus.h>
#include <asm/auxio.h> #include <asm/auxio.h>
/* This cannot be static, as it is referenced in entry.S */ /* This cannot be static, as it is referenced in irq.c */
void __iomem *auxio_register = NULL; void __iomem *auxio_register = NULL;
enum auxio_type { enum auxio_type {
......
...@@ -271,8 +271,9 @@ cplus_fptrap_insn_1: ...@@ -271,8 +271,9 @@ cplus_fptrap_insn_1:
fmuld %f0, %f2, %f26 fmuld %f0, %f2, %f26
faddd %f0, %f2, %f28 faddd %f0, %f2, %f28
fmuld %f0, %f2, %f30 fmuld %f0, %f2, %f30
membar #Sync
b,pt %xcc, fpdis_exit b,pt %xcc, fpdis_exit
membar #Sync nop
2: andcc %g5, FPRS_DU, %g0 2: andcc %g5, FPRS_DU, %g0
bne,pt %icc, 3f bne,pt %icc, 3f
fzero %f32 fzero %f32
...@@ -301,8 +302,9 @@ cplus_fptrap_insn_2: ...@@ -301,8 +302,9 @@ cplus_fptrap_insn_2:
fmuld %f32, %f34, %f58 fmuld %f32, %f34, %f58
faddd %f32, %f34, %f60 faddd %f32, %f34, %f60
fmuld %f32, %f34, %f62 fmuld %f32, %f34, %f62
membar #Sync
ba,pt %xcc, fpdis_exit ba,pt %xcc, fpdis_exit
membar #Sync nop
3: mov SECONDARY_CONTEXT, %g3 3: mov SECONDARY_CONTEXT, %g3
add %g6, TI_FPREGS, %g1 add %g6, TI_FPREGS, %g1
ldxa [%g3] ASI_DMMU, %g5 ldxa [%g3] ASI_DMMU, %g5
...@@ -699,116 +701,6 @@ utrap_ill: ...@@ -699,116 +701,6 @@ utrap_ill:
ba,pt %xcc, rtrap ba,pt %xcc, rtrap
clr %l6 clr %l6
#ifdef CONFIG_BLK_DEV_FD
.globl floppy_hardint
floppy_hardint:
wr %g0, (1 << 11), %clear_softint
sethi %hi(doing_pdma), %g1
ld [%g1 + %lo(doing_pdma)], %g2
brz,pn %g2, floppy_dosoftint
sethi %hi(fdc_status), %g3
ldx [%g3 + %lo(fdc_status)], %g3
sethi %hi(pdma_vaddr), %g5
ldx [%g5 + %lo(pdma_vaddr)], %g4
sethi %hi(pdma_size), %g5
ldx [%g5 + %lo(pdma_size)], %g5
next_byte:
lduba [%g3] ASI_PHYS_BYPASS_EC_E, %g7
andcc %g7, 0x80, %g0
be,pn %icc, floppy_fifo_emptied
andcc %g7, 0x20, %g0
be,pn %icc, floppy_overrun
andcc %g7, 0x40, %g0
be,pn %icc, floppy_write
sub %g5, 1, %g5
inc %g3
lduba [%g3] ASI_PHYS_BYPASS_EC_E, %g7
dec %g3
orcc %g0, %g5, %g0
stb %g7, [%g4]
bne,pn %xcc, next_byte
add %g4, 1, %g4
b,pt %xcc, floppy_tdone
nop
floppy_write:
ldub [%g4], %g7
orcc %g0, %g5, %g0
inc %g3
stba %g7, [%g3] ASI_PHYS_BYPASS_EC_E
dec %g3
bne,pn %xcc, next_byte
add %g4, 1, %g4
floppy_tdone:
sethi %hi(pdma_vaddr), %g1
stx %g4, [%g1 + %lo(pdma_vaddr)]
sethi %hi(pdma_size), %g1
stx %g5, [%g1 + %lo(pdma_size)]
sethi %hi(auxio_register), %g1
ldx [%g1 + %lo(auxio_register)], %g7
lduba [%g7] ASI_PHYS_BYPASS_EC_E, %g5
or %g5, AUXIO_AUX1_FTCNT, %g5
/* andn %g5, AUXIO_AUX1_MASK, %g5 */
stba %g5, [%g7] ASI_PHYS_BYPASS_EC_E
andn %g5, AUXIO_AUX1_FTCNT, %g5
/* andn %g5, AUXIO_AUX1_MASK, %g5 */
nop; nop; nop; nop; nop; nop;
nop; nop; nop; nop; nop; nop;
stba %g5, [%g7] ASI_PHYS_BYPASS_EC_E
sethi %hi(doing_pdma), %g1
b,pt %xcc, floppy_dosoftint
st %g0, [%g1 + %lo(doing_pdma)]
floppy_fifo_emptied:
sethi %hi(pdma_vaddr), %g1
stx %g4, [%g1 + %lo(pdma_vaddr)]
sethi %hi(pdma_size), %g1
stx %g5, [%g1 + %lo(pdma_size)]
sethi %hi(irq_action), %g1
or %g1, %lo(irq_action), %g1
ldx [%g1 + (11 << 3)], %g3 ! irqaction[floppy_irq]
ldx [%g3 + 0x08], %g4 ! action->flags>>48==ino
sethi %hi(ivector_table), %g3
srlx %g4, 48, %g4
or %g3, %lo(ivector_table), %g3
sllx %g4, 5, %g4
ldx [%g3 + %g4], %g4 ! &ivector_table[ino]
ldx [%g4 + 0x10], %g4 ! bucket->iclr
stwa %g0, [%g4] ASI_PHYS_BYPASS_EC_E ! ICLR_IDLE
membar #Sync ! probably not needed...
retry
floppy_overrun:
sethi %hi(pdma_vaddr), %g1
stx %g4, [%g1 + %lo(pdma_vaddr)]
sethi %hi(pdma_size), %g1
stx %g5, [%g1 + %lo(pdma_size)]
sethi %hi(doing_pdma), %g1
st %g0, [%g1 + %lo(doing_pdma)]
floppy_dosoftint:
rdpr %pil, %g2
wrpr %g0, 15, %pil
sethi %hi(109f), %g7
b,pt %xcc, etrap_irq
109: or %g7, %lo(109b), %g7
mov 11, %o0
mov 0, %o1
call sparc_floppy_irq
add %sp, PTREGS_OFF, %o2
b,pt %xcc, rtrap_irq
nop
#endif /* CONFIG_BLK_DEV_FD */
/* XXX Here is stuff we still need to write... -DaveM XXX */ /* XXX Here is stuff we still need to write... -DaveM XXX */
.globl netbsd_syscall .globl netbsd_syscall
netbsd_syscall: netbsd_syscall:
......
...@@ -37,6 +37,7 @@ ...@@ -37,6 +37,7 @@
#include <asm/uaccess.h> #include <asm/uaccess.h>
#include <asm/cache.h> #include <asm/cache.h>
#include <asm/cpudata.h> #include <asm/cpudata.h>
#include <asm/auxio.h>
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
static void distribute_irqs(void); static void distribute_irqs(void);
...@@ -834,137 +835,65 @@ void handler_irq(int irq, struct pt_regs *regs) ...@@ -834,137 +835,65 @@ void handler_irq(int irq, struct pt_regs *regs)
} }
#ifdef CONFIG_BLK_DEV_FD #ifdef CONFIG_BLK_DEV_FD
extern void floppy_interrupt(int irq, void *dev_cookie, struct pt_regs *regs); extern irqreturn_t floppy_interrupt(int, void *, struct pt_regs *);;
void sparc_floppy_irq(int irq, void *dev_cookie, struct pt_regs *regs) /* XXX No easy way to include asm/floppy.h XXX */
{ extern unsigned char *pdma_vaddr;
struct irqaction *action = *(irq + irq_action); extern unsigned long pdma_size;
struct ino_bucket *bucket; extern volatile int doing_pdma;
int cpu = smp_processor_id(); extern unsigned long fdc_status;
irq_enter();
kstat_this_cpu.irqs[irq]++;
*(irq_work(cpu, irq)) = 0;
bucket = get_ino_in_irqaction(action) + ivector_table;
bucket->flags |= IBF_INPROGRESS;
floppy_interrupt(irq, dev_cookie, regs);
upa_writel(ICLR_IDLE, bucket->iclr);
bucket->flags &= ~IBF_INPROGRESS;
irq_exit();
}
#endif
/* The following assumes that the branch lies before the place we
* are branching to. This is the case for a trap vector...
* You have been warned.
*/
#define SPARC_BRANCH(dest_addr, inst_addr) \
(0x10800000 | ((((dest_addr)-(inst_addr))>>2)&0x3fffff))
#define SPARC_NOP (0x01000000)
static void install_fast_irq(unsigned int cpu_irq, irqreturn_t sparc_floppy_irq(int irq, void *dev_cookie, struct pt_regs *regs)
irqreturn_t (*handler)(int, void *, struct pt_regs *))
{ {
extern unsigned long sparc64_ttable_tl0; if (likely(doing_pdma)) {
unsigned long ttent = (unsigned long) &sparc64_ttable_tl0; void __iomem *stat = (void __iomem *) fdc_status;
unsigned int *insns; unsigned char *vaddr = pdma_vaddr;
unsigned long size = pdma_size;
ttent += 0x820; u8 val;
ttent += (cpu_irq - 1) << 5;
insns = (unsigned int *) ttent; while (size) {
-	insns[0] = SPARC_BRANCH(((unsigned long) handler),
-				((unsigned long)&insns[0]));
-	insns[1] = SPARC_NOP;
-	__asm__ __volatile__("membar #StoreStore; flush %0" : : "r" (ttent));
-}
-
-int request_fast_irq(unsigned int irq,
-		     irqreturn_t (*handler)(int, void *, struct pt_regs *),
-		     unsigned long irqflags, const char *name, void *dev_id)
-{
-	struct irqaction *action;
-	struct ino_bucket *bucket = __bucket(irq);
-	unsigned long flags;
-
-	/* No pil0 dummy buckets allowed here. */
-	if (bucket < &ivector_table[0] ||
-	    bucket >= &ivector_table[NUM_IVECS]) {
-		unsigned int *caller;
-
-		__asm__ __volatile__("mov %%i7, %0" : "=r" (caller));
-		printk(KERN_CRIT "request_fast_irq: Old style IRQ registry attempt "
-		       "from %p, irq %08x.\n", caller, irq);
-		return -EINVAL;
-	}
-
-	if (!handler)
-		return -EINVAL;
-
-	if ((bucket->pil == 0) || (bucket->pil == 14)) {
-		printk("request_fast_irq: Trying to register shared IRQ 0 or 14.\n");
-		return -EBUSY;
-	}
-
-	spin_lock_irqsave(&irq_action_lock, flags);
-
-	action = *(bucket->pil + irq_action);
-	if (action) {
-		if (action->flags & SA_SHIRQ)
-			panic("Trying to register fast irq when already shared.\n");
-		if (irqflags & SA_SHIRQ)
-			panic("Trying to register fast irq as shared.\n");
-		printk("request_fast_irq: Trying to register yet already owned.\n");
-		spin_unlock_irqrestore(&irq_action_lock, flags);
-		return -EBUSY;
-	}
-
-	/*
-	 * We do not check for SA_SAMPLE_RANDOM in this path. Neither do we
-	 * support smp intr affinity in this path.
-	 */
-	if (irqflags & SA_STATIC_ALLOC) {
-		if (static_irq_count < MAX_STATIC_ALLOC)
-			action = &static_irqaction[static_irq_count++];
-		else
-			printk("Request for IRQ%d (%s) SA_STATIC_ALLOC failed "
-			       "using kmalloc\n", bucket->pil, name);
-	}
-	if (action == NULL)
-		action = (struct irqaction *)kmalloc(sizeof(struct irqaction),
-						     GFP_ATOMIC);
-	if (!action) {
-		spin_unlock_irqrestore(&irq_action_lock, flags);
-		return -ENOMEM;
-	}
-	install_fast_irq(bucket->pil, handler);
-
-	bucket->irq_info = action;
-	bucket->flags |= IBF_ACTIVE;
-
-	action->handler = handler;
-	action->flags = irqflags;
-	action->dev_id = NULL;
-	action->name = name;
-	action->next = NULL;
-	put_ino_in_irqaction(action, irq);
-	put_smpaff_in_irqaction(action, CPU_MASK_NONE);
-
-	*(bucket->pil + irq_action) = action;
-	enable_irq(irq);
-
-	spin_unlock_irqrestore(&irq_action_lock, flags);
-
-#ifdef CONFIG_SMP
-	distribute_irqs();
-#endif
-	return 0;
-}
+			val = readb(stat);
+			if (unlikely(!(val & 0x80))) {
+				pdma_vaddr = vaddr;
+				pdma_size = size;
+				return IRQ_HANDLED;
+			}
+			if (unlikely(!(val & 0x20))) {
+				pdma_vaddr = vaddr;
+				pdma_size = size;
+				doing_pdma = 0;
+				goto main_interrupt;
+			}
+			if (val & 0x40) {
+				/* read */
+				*vaddr++ = readb(stat + 1);
+			} else {
+				unsigned char data = *vaddr++;
+
+				/* write */
+				writeb(data, stat + 1);
+			}
+			size--;
+		}
+
+		pdma_vaddr = vaddr;
+		pdma_size = size;
+
+		/* Send Terminal Count pulse to floppy controller. */
+		val = readb(auxio_register);
+		val |= AUXIO_AUX1_FTCNT;
+		writeb(val, auxio_register);
+		val &= ~AUXIO_AUX1_FTCNT;
+		writeb(val, auxio_register);
+
+		doing_pdma = 0;
+	}
+
+main_interrupt:
+	return floppy_interrupt(irq, dev_cookie, regs);
+}
+EXPORT_SYMBOL(sparc_floppy_irq);
+#endif
 /* We really don't need these at all on the Sparc.  We only have
  * stubs here because they are exported to modules.
......
@@ -32,8 +32,9 @@ static __inline__ int __sem_update_count(struct semaphore *sem, int incr)
 "	add	%1, %4, %1\n"
 "	cas	[%3], %0, %1\n"
 "	cmp	%0, %1\n"
+"	membar	#StoreLoad | #StoreStore\n"
 "	bne,pn	%%icc, 1b\n"
-"	 membar	#StoreLoad | #StoreStore\n"
+"	 nop\n"
 	: "=&r" (old_count), "=&r" (tmp), "=m" (sem->count)
 	: "r" (&sem->count), "r" (incr), "m" (sem->count)
 	: "cc");
@@ -71,8 +72,9 @@ void up(struct semaphore *sem)
 "	cmp	%%g1, %%g7\n"
 "	bne,pn	%%icc, 1b\n"
 "	 addcc	%%g7, 1, %%g0\n"
+"	membar	#StoreLoad | #StoreStore\n"
 "	ble,pn	%%icc, 3f\n"
-"	 membar	#StoreLoad | #StoreStore\n"
+"	 nop\n"
 "2:\n"
 "	.subsection 2\n"
 "3:	mov	%0, %%g1\n"
@@ -128,8 +130,9 @@ void __sched down(struct semaphore *sem)
 "	cmp	%%g1, %%g7\n"
 "	bne,pn	%%icc, 1b\n"
 "	 cmp	%%g7, 1\n"
+"	membar	#StoreLoad | #StoreStore\n"
 "	bl,pn	%%icc, 3f\n"
-"	 membar	#StoreLoad | #StoreStore\n"
+"	 nop\n"
 "2:\n"
 "	.subsection 2\n"
 "3:	mov	%0, %%g1\n"
@@ -233,8 +236,9 @@ int __sched down_interruptible(struct semaphore *sem)
 "	cmp	%%g1, %%g7\n"
 "	bne,pn	%%icc, 1b\n"
 "	 cmp	%%g7, 1\n"
+"	membar	#StoreLoad | #StoreStore\n"
 "	bl,pn	%%icc, 3f\n"
-"	 membar	#StoreLoad | #StoreStore\n"
+"	 nop\n"
 "2:\n"
 "	.subsection 2\n"
 "3:	mov	%2, %%g1\n"
......
@@ -227,7 +227,6 @@ EXPORT_SYMBOL(__flush_dcache_range);
 EXPORT_SYMBOL(mostek_lock);
 EXPORT_SYMBOL(mstk48t02_regs);
-EXPORT_SYMBOL(request_fast_irq);
 #ifdef CONFIG_SUN_AUXIO
 EXPORT_SYMBOL(auxio_set_led);
 EXPORT_SYMBOL(auxio_set_lte);
......
@@ -98,8 +98,9 @@ startup_continue:
 	sethi		%hi(prom_entry_lock), %g2
 1:	ldstub		[%g2 + %lo(prom_entry_lock)], %g1
+	membar		#StoreLoad | #StoreStore
 	brnz,pn		%g1, 1b
-	 membar		#StoreLoad | #StoreStore
+	 nop
 	sethi		%hi(p1275buf), %g2
 	or		%g2, %lo(p1275buf), %g2
......
@@ -87,14 +87,17 @@
 #define LOOP_CHUNK3(src, dest, len, branch_dest) \
 	MAIN_LOOP_CHUNK(src, dest, f32, f48, len, branch_dest)
 
+#define DO_SYNC			membar	#Sync;
 #define STORE_SYNC(dest, fsrc) \
 	EX_ST(STORE_BLK(%fsrc, %dest)); \
-	add %dest, 0x40, %dest;
+	add %dest, 0x40, %dest; \
+	DO_SYNC
 
 #define STORE_JUMP(dest, fsrc, target) \
 	EX_ST(STORE_BLK(%fsrc, %dest)); \
 	add %dest, 0x40, %dest; \
-	ba,pt %xcc, target;
+	ba,pt %xcc, target; \
+	nop;
 
 #define FINISH_VISCHUNK(dest, f0, f1, left) \
 	subcc %left, 8, %left;\
@@ -239,17 +242,17 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
 	ba,pt		%xcc, 1b+4
 	 faligndata	%f0, %f2, %f48
 1:	FREG_FROB(f16,f18,f20,f22,f24,f26,f28,f30,f32)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f32,f34,f36,f38,f40,f42,f44,f46,f0)
-	STORE_JUMP(o0, f48, 40f) membar #Sync
+	STORE_JUMP(o0, f48, 40f)
 2:	FREG_FROB(f32,f34,f36,f38,f40,f42,f44,f46,f0)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f0, f2, f4, f6, f8, f10,f12,f14,f16)
-	STORE_JUMP(o0, f48, 48f) membar #Sync
+	STORE_JUMP(o0, f48, 48f)
 3:	FREG_FROB(f0, f2, f4, f6, f8, f10,f12,f14,f16)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f16,f18,f20,f22,f24,f26,f28,f30,f32)
-	STORE_JUMP(o0, f48, 56f) membar #Sync
+	STORE_JUMP(o0, f48, 56f)
 
 1:	FREG_FROB(f2, f4, f6, f8, f10,f12,f14,f16,f18)
 	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
@@ -260,17 +263,17 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
 	ba,pt		%xcc, 1b+4
 	 faligndata	%f2, %f4, %f48
 1:	FREG_FROB(f18,f20,f22,f24,f26,f28,f30,f32,f34)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f34,f36,f38,f40,f42,f44,f46,f0, f2)
-	STORE_JUMP(o0, f48, 41f) membar #Sync
+	STORE_JUMP(o0, f48, 41f)
 2:	FREG_FROB(f34,f36,f38,f40,f42,f44,f46,f0, f2)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f2, f4, f6, f8, f10,f12,f14,f16,f18)
-	STORE_JUMP(o0, f48, 49f) membar #Sync
+	STORE_JUMP(o0, f48, 49f)
 3:	FREG_FROB(f2, f4, f6, f8, f10,f12,f14,f16,f18)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f18,f20,f22,f24,f26,f28,f30,f32,f34)
-	STORE_JUMP(o0, f48, 57f) membar #Sync
+	STORE_JUMP(o0, f48, 57f)
 
 1:	FREG_FROB(f4, f6, f8, f10,f12,f14,f16,f18,f20)
 	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
@@ -281,17 +284,17 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
 	ba,pt		%xcc, 1b+4
 	 faligndata	%f4, %f6, %f48
 1:	FREG_FROB(f20,f22,f24,f26,f28,f30,f32,f34,f36)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f36,f38,f40,f42,f44,f46,f0, f2, f4)
-	STORE_JUMP(o0, f48, 42f) membar #Sync
+	STORE_JUMP(o0, f48, 42f)
 2:	FREG_FROB(f36,f38,f40,f42,f44,f46,f0, f2, f4)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f4, f6, f8, f10,f12,f14,f16,f18,f20)
-	STORE_JUMP(o0, f48, 50f) membar #Sync
+	STORE_JUMP(o0, f48, 50f)
 3:	FREG_FROB(f4, f6, f8, f10,f12,f14,f16,f18,f20)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f20,f22,f24,f26,f28,f30,f32,f34,f36)
-	STORE_JUMP(o0, f48, 58f) membar #Sync
+	STORE_JUMP(o0, f48, 58f)
 
 1:	FREG_FROB(f6, f8, f10,f12,f14,f16,f18,f20,f22)
 	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
@@ -302,17 +305,17 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
 	ba,pt		%xcc, 1b+4
 	 faligndata	%f6, %f8, %f48
 1:	FREG_FROB(f22,f24,f26,f28,f30,f32,f34,f36,f38)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f38,f40,f42,f44,f46,f0, f2, f4, f6)
-	STORE_JUMP(o0, f48, 43f) membar #Sync
+	STORE_JUMP(o0, f48, 43f)
 2:	FREG_FROB(f38,f40,f42,f44,f46,f0, f2, f4, f6)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f6, f8, f10,f12,f14,f16,f18,f20,f22)
-	STORE_JUMP(o0, f48, 51f) membar #Sync
+	STORE_JUMP(o0, f48, 51f)
 3:	FREG_FROB(f6, f8, f10,f12,f14,f16,f18,f20,f22)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f22,f24,f26,f28,f30,f32,f34,f36,f38)
-	STORE_JUMP(o0, f48, 59f) membar #Sync
+	STORE_JUMP(o0, f48, 59f)
 
 1:	FREG_FROB(f8, f10,f12,f14,f16,f18,f20,f22,f24)
 	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
@@ -323,17 +326,17 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
 	ba,pt		%xcc, 1b+4
 	 faligndata	%f8, %f10, %f48
 1:	FREG_FROB(f24,f26,f28,f30,f32,f34,f36,f38,f40)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f40,f42,f44,f46,f0, f2, f4, f6, f8)
-	STORE_JUMP(o0, f48, 44f) membar #Sync
+	STORE_JUMP(o0, f48, 44f)
 2:	FREG_FROB(f40,f42,f44,f46,f0, f2, f4, f6, f8)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f8, f10,f12,f14,f16,f18,f20,f22,f24)
-	STORE_JUMP(o0, f48, 52f) membar #Sync
+	STORE_JUMP(o0, f48, 52f)
 3:	FREG_FROB(f8, f10,f12,f14,f16,f18,f20,f22,f24)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f24,f26,f28,f30,f32,f34,f36,f38,f40)
-	STORE_JUMP(o0, f48, 60f) membar #Sync
+	STORE_JUMP(o0, f48, 60f)
 
 1:	FREG_FROB(f10,f12,f14,f16,f18,f20,f22,f24,f26)
 	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
@@ -344,17 +347,17 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
 	ba,pt		%xcc, 1b+4
 	 faligndata	%f10, %f12, %f48
 1:	FREG_FROB(f26,f28,f30,f32,f34,f36,f38,f40,f42)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f42,f44,f46,f0, f2, f4, f6, f8, f10)
-	STORE_JUMP(o0, f48, 45f) membar #Sync
+	STORE_JUMP(o0, f48, 45f)
 2:	FREG_FROB(f42,f44,f46,f0, f2, f4, f6, f8, f10)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f10,f12,f14,f16,f18,f20,f22,f24,f26)
-	STORE_JUMP(o0, f48, 53f) membar #Sync
+	STORE_JUMP(o0, f48, 53f)
 3:	FREG_FROB(f10,f12,f14,f16,f18,f20,f22,f24,f26)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f26,f28,f30,f32,f34,f36,f38,f40,f42)
-	STORE_JUMP(o0, f48, 61f) membar #Sync
+	STORE_JUMP(o0, f48, 61f)
 
 1:	FREG_FROB(f12,f14,f16,f18,f20,f22,f24,f26,f28)
 	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
@@ -365,17 +368,17 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
 	ba,pt		%xcc, 1b+4
 	 faligndata	%f12, %f14, %f48
 1:	FREG_FROB(f28,f30,f32,f34,f36,f38,f40,f42,f44)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f44,f46,f0, f2, f4, f6, f8, f10,f12)
-	STORE_JUMP(o0, f48, 46f) membar #Sync
+	STORE_JUMP(o0, f48, 46f)
 2:	FREG_FROB(f44,f46,f0, f2, f4, f6, f8, f10,f12)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f12,f14,f16,f18,f20,f22,f24,f26,f28)
-	STORE_JUMP(o0, f48, 54f) membar #Sync
+	STORE_JUMP(o0, f48, 54f)
 3:	FREG_FROB(f12,f14,f16,f18,f20,f22,f24,f26,f28)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f28,f30,f32,f34,f36,f38,f40,f42,f44)
-	STORE_JUMP(o0, f48, 62f) membar #Sync
+	STORE_JUMP(o0, f48, 62f)
 
 1:	FREG_FROB(f14,f16,f18,f20,f22,f24,f26,f28,f30)
 	LOOP_CHUNK1(o1, o0, GLOBAL_SPARE, 1f)
@@ -386,17 +389,17 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
 	ba,pt		%xcc, 1b+4
 	 faligndata	%f14, %f16, %f48
 1:	FREG_FROB(f30,f32,f34,f36,f38,f40,f42,f44,f46)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f46,f0, f2, f4, f6, f8, f10,f12,f14)
-	STORE_JUMP(o0, f48, 47f) membar #Sync
+	STORE_JUMP(o0, f48, 47f)
 2:	FREG_FROB(f46,f0, f2, f4, f6, f8, f10,f12,f14)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f14,f16,f18,f20,f22,f24,f26,f28,f30)
-	STORE_JUMP(o0, f48, 55f) membar #Sync
+	STORE_JUMP(o0, f48, 55f)
 3:	FREG_FROB(f14,f16,f18,f20,f22,f24,f26,f28,f30)
-	STORE_SYNC(o0, f48) membar #Sync
+	STORE_SYNC(o0, f48)
 	FREG_FROB(f30,f32,f34,f36,f38,f40,f42,f44,f46)
-	STORE_JUMP(o0, f48, 63f) membar #Sync
+	STORE_JUMP(o0, f48, 63f)
 
 40:	FINISH_VISCHUNK(o0, f0,  f2,  g3)
 41:	FINISH_VISCHUNK(o0, f2,  f4,  g3)
......
@@ -72,7 +72,11 @@ vis1:	ldub		[%g6 + TI_FPSAVED], %g3
 	stda		%f48, [%g3 + %g1] ASI_BLK_P
 5:	membar		#Sync
-	jmpl		%g7 + %g0, %g0
+	ba,pt		%xcc, 80f
+	 nop
+
+	.align		32
+80:	jmpl		%g7 + %g0, %g0
 	 nop
 
 6:	ldub		[%g3 + TI_FPSAVED], %o5
@@ -87,8 +91,11 @@ vis1:	ldub		[%g6 + TI_FPSAVED], %g3
 	stda		%f32, [%g2 + %g1] ASI_BLK_P
 	stda		%f48, [%g3 + %g1] ASI_BLK_P
 	membar		#Sync
-	jmpl		%g7 + %g0, %g0
+	ba,pt		%xcc, 80f
+	 nop
+
+	.align		32
+80:	jmpl		%g7 + %g0, %g0
 	 nop
 
 	.align		32
@@ -126,6 +133,10 @@ VISenterhalf:
 	stda		%f0, [%g2 + %g1] ASI_BLK_P
 	stda		%f16, [%g3 + %g1] ASI_BLK_P
 	membar		#Sync
+	ba,pt		%xcc, 4f
+	 nop
+
+	.align		32
 4:	and		%o5, FPRS_DU, %o5
 	jmpl		%g7 + %g0, %g0
 	 wr		%o5, FPRS_FEF, %fprs
@@ -7,18 +7,6 @@
 #include <linux/config.h>
 #include <asm/asi.h>
 
-/* On SMP we need to use memory barriers to ensure
- * correct memory operation ordering, nop these out
- * for uniprocessor.
- */
-#ifdef CONFIG_SMP
-#define ATOMIC_PRE_BARRIER	membar #StoreLoad | #LoadLoad
-#define ATOMIC_POST_BARRIER	membar #StoreLoad | #StoreStore
-#else
-#define ATOMIC_PRE_BARRIER	nop
-#define ATOMIC_POST_BARRIER	nop
-#endif
-
 	.text
 
 	/* Two versions of the atomic routines, one that
@@ -52,6 +40,24 @@ atomic_sub: /* %o0 = decrement, %o1 = atomic_ptr */
 	 nop
 	.size	atomic_sub, .-atomic_sub
 
+	/* On SMP we need to use memory barriers to ensure
+	 * correct memory operation ordering, nop these out
+	 * for uniprocessor.
+	 */
+#ifdef CONFIG_SMP
+
+#define ATOMIC_PRE_BARRIER	membar #StoreLoad | #LoadLoad;
+#define ATOMIC_POST_BARRIER	\
+	ba,pt %xcc, 80b;	\
+	membar #StoreLoad | #StoreStore
+
+80:	retl
+	 nop
+#else
+#define ATOMIC_PRE_BARRIER
+#define ATOMIC_POST_BARRIER
+#endif
+
 	.globl	atomic_add_ret
 	.type	atomic_add_ret,#function
 atomic_add_ret: /* %o0 = increment, %o1 = atomic_ptr */
@@ -62,9 +68,10 @@ atomic_add_ret: /* %o0 = increment, %o1 = atomic_ptr */
 	cmp	%g1, %g7
 	bne,pn	%icc, 1b
 	 add	%g7, %o0, %g7
+	sra	%g7, 0, %o0
 	ATOMIC_POST_BARRIER
 	retl
-	 sra	%g7, 0, %o0
+	 nop
 	.size	atomic_add_ret, .-atomic_add_ret
 
 	.globl	atomic_sub_ret
@@ -77,9 +84,10 @@ atomic_sub_ret: /* %o0 = decrement, %o1 = atomic_ptr */
 	cmp	%g1, %g7
 	bne,pn	%icc, 1b
 	 sub	%g7, %o0, %g7
+	sra	%g7, 0, %o0
 	ATOMIC_POST_BARRIER
 	retl
-	 sra	%g7, 0, %o0
+	 nop
 	.size	atomic_sub_ret, .-atomic_sub_ret
 
 	.globl	atomic64_add
@@ -118,9 +126,10 @@ atomic64_add_ret: /* %o0 = increment, %o1 = atomic_ptr */
 	cmp	%g1, %g7
 	bne,pn	%xcc, 1b
 	 add	%g7, %o0, %g7
+	mov	%g7, %o0
 	ATOMIC_POST_BARRIER
 	retl
-	 mov	%g7, %o0
+	 nop
 	.size	atomic64_add_ret, .-atomic64_add_ret
 
 	.globl	atomic64_sub_ret
@@ -133,7 +142,8 @@ atomic64_sub_ret: /* %o0 = decrement, %o1 = atomic_ptr */
 	cmp	%g1, %g7
 	bne,pn	%xcc, 1b
 	 sub	%g7, %o0, %g7
+	mov	%g7, %o0
 	ATOMIC_POST_BARRIER
 	retl
-	 mov	%g7, %o0
+	 nop
 	.size	atomic64_sub_ret, .-atomic64_sub_ret
@@ -7,20 +7,26 @@
 #include <linux/config.h>
 #include <asm/asi.h>
 
+	.text
+
 /* On SMP we need to use memory barriers to ensure
  * correct memory operation ordering, nop these out
  * for uniprocessor.
  */
 #ifdef CONFIG_SMP
 #define BITOP_PRE_BARRIER	membar #StoreLoad | #LoadLoad
-#define BITOP_POST_BARRIER	membar #StoreLoad | #StoreStore
+#define BITOP_POST_BARRIER	\
+	ba,pt	%xcc, 80b;	\
+	membar	#StoreLoad | #StoreStore
+
+80:	retl
+	 nop
 #else
-#define BITOP_PRE_BARRIER	nop
-#define BITOP_POST_BARRIER	nop
+#define BITOP_PRE_BARRIER
+#define BITOP_POST_BARRIER
 #endif
 
-	.text
-
 	.globl	test_and_set_bit
 	.type	test_and_set_bit,#function
 test_and_set_bit:	/* %o0=nr, %o1=addr */
@@ -37,10 +43,11 @@ test_and_set_bit:	/* %o0=nr, %o1=addr */
 	cmp	%g7, %g1
 	bne,pn	%xcc, 1b
 	 and	%g7, %o2, %g2
-	BITOP_POST_BARRIER
 	clr	%o0
+	movrne	%g2, 1, %o0
+	BITOP_POST_BARRIER
 	retl
-	 movrne	%g2, 1, %o0
+	 nop
 	.size	test_and_set_bit, .-test_and_set_bit
 
 	.globl	test_and_clear_bit
@@ -59,10 +66,11 @@ test_and_clear_bit:	/* %o0=nr, %o1=addr */
 	cmp	%g7, %g1
 	bne,pn	%xcc, 1b
 	 and	%g7, %o2, %g2
-	BITOP_POST_BARRIER
 	clr	%o0
+	movrne	%g2, 1, %o0
+	BITOP_POST_BARRIER
 	retl
-	 movrne	%g2, 1, %o0
+	 nop
 	.size	test_and_clear_bit, .-test_and_clear_bit
 
 	.globl	test_and_change_bit
@@ -81,10 +89,11 @@ test_and_change_bit:	/* %o0=nr, %o1=addr */
 	cmp	%g7, %g1
 	bne,pn	%xcc, 1b
 	 and	%g7, %o2, %g2
-	BITOP_POST_BARRIER
 	clr	%o0
+	movrne	%g2, 1, %o0
+	BITOP_POST_BARRIER
 	retl
-	 movrne	%g2, 1, %o0
+	 nop
 	.size	test_and_change_bit, .-test_and_change_bit
 
 	.globl	set_bit
......
@@ -252,8 +252,9 @@ void _do_write_lock (rwlock_t *rw, char *str)
 "	andn	%%g1, %%g3, %%g7\n"
 "	casx	[%0], %%g1, %%g7\n"
 "	cmp	%%g1, %%g7\n"
+"	membar	#StoreLoad | #StoreStore\n"
 "	bne,pn	%%xcc, 1b\n"
-"	 membar	#StoreLoad | #StoreStore"
+"	 nop"
 	: /* no outputs */
 	: "r" (&(rw->lock))
 	: "g3", "g1", "g7", "cc", "memory");
@@ -351,8 +352,9 @@ int _do_write_trylock (rwlock_t *rw, char *str)
 "	andn	%%g1, %%g3, %%g7\n"
 "	casx	[%0], %%g1, %%g7\n"
 "	cmp	%%g1, %%g7\n"
+"	membar	#StoreLoad | #StoreStore\n"
 "	bne,pn	%%xcc, 1b\n"
-"	 membar	#StoreLoad | #StoreStore"
+"	 nop"
 	: /* no outputs */
 	: "r" (&(rw->lock))
 	: "g3", "g1", "g7", "cc", "memory");
......
@@ -48,8 +48,9 @@ start_to_zero:
 #endif
 to_zero:
 	ldstub		[%o1], %g3
+	membar		#StoreLoad | #StoreStore
 	brnz,pn		%g3, spin_on_lock
-	 membar		#StoreLoad | #StoreStore
+	 nop
 loop2:	cas		[%o0], %g2, %g7	/* ASSERT(g7 == 0) */
 	cmp		%g2, %g7
@@ -71,8 +72,9 @@ loop2:	cas	[%o0], %g2, %g7	/* ASSERT(g7 == 0) */
 	 nop
 spin_on_lock:
 	ldub		[%o1], %g3
+	membar		#LoadLoad
 	brnz,pt		%g3, spin_on_lock
-	 membar		#LoadLoad
+	 nop
 	ba,pt		%xcc, to_zero
 	 nop
 	nop
@@ -17,8 +17,9 @@ __down_read:
 	bne,pn		%icc, 1b
 	 add		%g7, 1, %g7
 	cmp		%g7, 0
+	membar		#StoreLoad | #StoreStore
 	bl,pn		%icc, 3f
-	 membar		#StoreLoad | #StoreStore
+	 nop
 2:
 	retl
 	 nop
@@ -57,8 +58,9 @@ __down_write:
 	cmp		%g3, %g7
 	bne,pn		%icc, 1b
 	 cmp		%g7, 0
+	membar		#StoreLoad | #StoreStore
 	bne,pn		%icc, 3f
-	 membar		#StoreLoad | #StoreStore
+	 nop
 2:	retl
 	 nop
 3:
@@ -97,8 +99,9 @@ __up_read:
 	cmp		%g1, %g7
 	bne,pn		%icc, 1b
 	 cmp		%g7, 0
+	membar		#StoreLoad | #StoreStore
 	bl,pn		%icc, 3f
-	 membar		#StoreLoad | #StoreStore
+	 nop
 2:	retl
 	 nop
 3:	sethi		%hi(RWSEM_ACTIVE_MASK), %g1
@@ -126,8 +129,9 @@ __up_write:
 	bne,pn		%icc, 1b
 	 sub		%g7, %g1, %g7
 	cmp		%g7, 0
+	membar		#StoreLoad | #StoreStore
 	bl,pn		%icc, 3f
-	 membar		#StoreLoad | #StoreStore
+	 nop
 2:
 	retl
 	 nop
@@ -151,8 +155,9 @@ __downgrade_write:
 	bne,pn		%icc, 1b
 	 sub		%g7, %g1, %g7
 	cmp		%g7, 0
+	membar		#StoreLoad | #StoreStore
 	bl,pn		%icc, 3f
-	 membar		#StoreLoad | #StoreStore
+	 nop
 2:
 	retl
 	 nop
......
@@ -136,8 +136,9 @@ static __inline__ void set_dcache_dirty(struct page *page, int this_cpu)
 			     "or	%%g1, %0, %%g1\n\t"
 			     "casx	[%2], %%g7, %%g1\n\t"
 			     "cmp	%%g7, %%g1\n\t"
+			     "membar	#StoreLoad | #StoreStore\n\t"
 			     "bne,pn	%%xcc, 1b\n\t"
-			     " membar	#StoreLoad | #StoreStore"
+			     " nop"
 			     : /* no outputs */
 			     : "r" (mask), "r" (non_cpu_bits), "r" (&page->flags)
 			     : "g1", "g7");
@@ -157,8 +158,9 @@ static __inline__ void clear_dcache_dirty_cpu(struct page *page, unsigned long c
 			     " andn	%%g7, %1, %%g1\n\t"
 			     "casx	[%2], %%g7, %%g1\n\t"
 			     "cmp	%%g7, %%g1\n\t"
+			     "membar	#StoreLoad | #StoreStore\n\t"
 			     "bne,pn	%%xcc, 1b\n\t"
-			     " membar	#StoreLoad | #StoreStore\n"
+			     " nop\n"
 			     "2:"
 			     : /* no outputs */
 			     : "r" (cpu), "r" (mask), "r" (&page->flags),
......
@@ -266,8 +266,9 @@ __cheetah_flush_tlb_pending:	/* 22 insns */
 	andn		%o3, 1, %o3
 	stxa		%g0, [%o3] ASI_IMMU_DEMAP
 2:	stxa		%g0, [%o3] ASI_DMMU_DEMAP
+	membar		#Sync
 	brnz,pt		%o1, 1b
-	 membar		#Sync
+	 nop
 	stxa		%g2, [%o4] ASI_DMMU
 	flush		%g6
 	wrpr		%g0, 0, %tl
......
@@ -38,7 +38,7 @@
 #include <linux/string.h>
 #include <linux/slab.h>
 #include <linux/preempt.h>
-#include <linux/moduleloader.h>
+
 #include <asm/cacheflush.h>
 #include <asm/pgtable.h>
 #include <asm/kdebug.h>
@@ -51,8 +51,6 @@ static struct kprobe *kprobe_prev;
 static unsigned long kprobe_status_prev, kprobe_old_rflags_prev, kprobe_saved_rflags_prev;
 static struct pt_regs jprobe_saved_regs;
 static long *jprobe_saved_rsp;
-static kprobe_opcode_t *get_insn_slot(void);
-static void free_insn_slot(kprobe_opcode_t *slot);
 void jprobe_return_end(void);
 
 /* copy of the kernel stack at the probe fire time */
@@ -274,48 +272,23 @@ static void prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
 	regs->rip = (unsigned long)p->ainsn.insn;
 }
 
-struct task_struct *arch_get_kprobe_task(void *ptr)
-{
-	return ((struct thread_info *) (((unsigned long) ptr) &
-					(~(THREAD_SIZE -1))))->task;
-}
-
 void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs)
 {
 	unsigned long *sara = (unsigned long *)regs->rsp;
 	struct kretprobe_instance *ri;
-	static void *orig_ret_addr;
 
-	/*
-	 * Save the return address when the return probe hits
-	 * the first time, and use it to populate the (krprobe
-	 * instance)->ret_addr for subsequent return probes at
-	 * the same addrress since stack address would have
-	 * the kretprobe_trampoline by then.
-	 */
-	if (((void*) *sara) != kretprobe_trampoline)
-		orig_ret_addr = (void*) *sara;
-
 	if ((ri = get_free_rp_inst(rp)) != NULL) {
 		ri->rp = rp;
-		ri->stack_addr = sara;
-		ri->ret_addr = orig_ret_addr;
-		add_rp_inst(ri);
+		ri->task = current;
+		ri->ret_addr = (kprobe_opcode_t *) *sara;
+
 		/* Replace the return addr with trampoline addr */
 		*sara = (unsigned long) &kretprobe_trampoline;
+
+		add_rp_inst(ri);
 	} else {
 		rp->nmissed++;
 	}
 }
 
-void arch_kprobe_flush_task(struct task_struct *tk)
-{
-	struct kretprobe_instance *ri;
-	while ((ri = get_rp_inst_tsk(tk)) != NULL) {
-		*((unsigned long *)(ri->stack_addr)) =
-					(unsigned long) ri->ret_addr;
-		recycle_rp_inst(ri);
-	}
-}
-
 /*
...@@ -428,36 +401,59 @@ int kprobe_handler(struct pt_regs *regs) ...@@ -428,36 +401,59 @@ int kprobe_handler(struct pt_regs *regs)
*/ */
int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
{ {
struct task_struct *tsk; struct kretprobe_instance *ri = NULL;
struct kretprobe_instance *ri; struct hlist_head *head;
struct hlist_head *head; struct hlist_node *node, *tmp;
struct hlist_node *node; unsigned long orig_ret_address = 0;
unsigned long *sara = (unsigned long *)regs->rsp - 1; unsigned long trampoline_address =(unsigned long)&kretprobe_trampoline;
tsk = arch_get_kprobe_task(sara);
head = kretprobe_inst_table_head(tsk);
hlist_for_each_entry(ri, node, head, hlist) {
if (ri->stack_addr == sara && ri->rp) {
if (ri->rp->handler)
ri->rp->handler(ri, regs);
}
}
return 0;
}
void trampoline_post_handler(struct kprobe *p, struct pt_regs *regs, head = kretprobe_inst_table_head(current);
unsigned long flags)
{
struct kretprobe_instance *ri;
/* RA already popped */
unsigned long *sara = ((unsigned long *)regs->rsp) - 1;
while ((ri = get_rp_inst(sara))) { /*
regs->rip = (unsigned long)ri->ret_addr; * It is possible to have multiple instances associated with a given
* task either because an multiple functions in the call path
* have a return probe installed on them, and/or more then one return
* return probe was registered for a target function.
*
* We can handle this because:
* - instances are always inserted at the head of the list
* - when multiple return probes are registered for the same
* function, the first instance's ret_addr will point to the
* real return address, and all the rest will point to
* kretprobe_trampoline
*/
hlist_for_each_entry_safe(ri, node, tmp, head, hlist) {
if (ri->task != current)
/* another task is sharing our hash bucket */
continue;
if (ri->rp && ri->rp->handler)
ri->rp->handler(ri, regs);
orig_ret_address = (unsigned long)ri->ret_addr;
recycle_rp_inst(ri); recycle_rp_inst(ri);
if (orig_ret_address != trampoline_address)
/*
* This is the real return address. Any other
* instances associated with this task are for
* other calls deeper on the call stack
*/
break;
} }
regs->eflags &= ~TF_MASK;
BUG_ON(!orig_ret_address || (orig_ret_address == trampoline_address));
regs->rip = orig_ret_address;
unlock_kprobes();
preempt_enable_no_resched();
/*
* By returning a non-zero value, we are telling
* kprobe_handler() that we have handled unlocking
* and re-enabling preemption.
*/
return 1;
} }
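The comment block in trampoline_probe_handler() describes an invariant: instances are pushed at the head of a per-task list, so the first instance whose ret_addr is not the trampoline holds the real return address. That invariant can be exercised outside the kernel. The sketch below is illustrative only: the list type, the TRAMPOLINE constant, and the function names are stand-ins, not kernel APIs.

```c
#include <assert.h>
#include <stddef.h>

#define TRAMPOLINE 0xdeadbeefUL	/* stand-in for &kretprobe_trampoline */

struct instance {
	unsigned long ret_addr;
	struct instance *next;
};

/* insert at the head of the list, as arch_prepare_kretprobe() does */
static struct instance *push(struct instance *head, struct instance *ri,
			     unsigned long ret_addr)
{
	ri->ret_addr = ret_addr;
	ri->next = head;
	return ri;
}

/* walk from the head, recycling instances; the first non-trampoline
 * ret_addr is the real return address for the call unwinding now */
static unsigned long pop_real_ret(struct instance **head)
{
	unsigned long orig = 0;

	while (*head) {
		struct instance *ri = *head;
		orig = ri->ret_addr;
		*head = ri->next;	/* "recycle" this instance */
		if (orig != TRAMPOLINE)
			break;
	}
	return orig;
}
```

Because later registrations for the same function sit in front of the instance holding the real address, the walk recycles the trampoline-valued entries and stops at the first real address, mirroring the break in the handler above.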
/* /*
...@@ -550,8 +546,7 @@ int post_kprobe_handler(struct pt_regs *regs) ...@@ -550,8 +546,7 @@ int post_kprobe_handler(struct pt_regs *regs)
current_kprobe->post_handler(current_kprobe, regs, 0); current_kprobe->post_handler(current_kprobe, regs, 0);
} }
if (current_kprobe->post_handler != trampoline_post_handler) resume_execution(current_kprobe, regs);
resume_execution(current_kprobe, regs);
regs->eflags |= kprobe_saved_rflags; regs->eflags |= kprobe_saved_rflags;
/* Restore the original saved kprobes variables and continue. */ /* Restore the original saved kprobes variables and continue. */
...@@ -682,111 +677,12 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) ...@@ -682,111 +677,12 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
return 0; return 0;
} }
-/*
- * kprobe->ainsn.insn points to the copy of the instruction to be single-stepped.
- * By default on x86_64, pages we get from kmalloc or vmalloc are not
- * executable. Single-stepping an instruction on such a page yields an
- * oops. So instead of storing the instruction copies in their respective
- * kprobe objects, we allocate a page, map it executable, and store all the
- * instruction copies there. (We can allocate additional pages if somebody
- * inserts a huge number of probes.) Each page can hold up to INSNS_PER_PAGE
- * instruction slots, each of which is MAX_INSN_SIZE*sizeof(kprobe_opcode_t)
- * bytes.
- */
-#define INSNS_PER_PAGE (PAGE_SIZE/(MAX_INSN_SIZE*sizeof(kprobe_opcode_t)))
-struct kprobe_insn_page {
-	struct hlist_node hlist;
-	kprobe_opcode_t *insns;		/* page of instruction slots */
-	char slot_used[INSNS_PER_PAGE];
-	int nused;
-};
-
-static struct hlist_head kprobe_insn_pages;
-
-/**
- * get_insn_slot() - Find a slot on an executable page for an instruction.
- * We allocate an executable page if there's no room on existing ones.
- */
-static kprobe_opcode_t *get_insn_slot(void)
-{
-	struct kprobe_insn_page *kip;
-	struct hlist_node *pos;
-
-	hlist_for_each(pos, &kprobe_insn_pages) {
-		kip = hlist_entry(pos, struct kprobe_insn_page, hlist);
-		if (kip->nused < INSNS_PER_PAGE) {
-			int i;
-			for (i = 0; i < INSNS_PER_PAGE; i++) {
-				if (!kip->slot_used[i]) {
-					kip->slot_used[i] = 1;
-					kip->nused++;
-					return kip->insns + (i * MAX_INSN_SIZE);
-				}
-			}
-			/* Surprise!  No unused slots.  Fix kip->nused. */
-			kip->nused = INSNS_PER_PAGE;
-		}
-	}
-
-	/* All out of space.  Need to allocate a new page.  Use slot 0. */
-	kip = kmalloc(sizeof(struct kprobe_insn_page), GFP_KERNEL);
-	if (!kip) {
-		return NULL;
-	}
-
-	/*
-	 * For the %rip-relative displacement fixups to be doable, we
-	 * need our instruction copy to be within +/- 2GB of any data it
-	 * might access via %rip.  That is, within 2GB of where the
-	 * kernel image and loaded module images reside.  So we allocate
-	 * a page in the module loading area.
-	 */
-	kip->insns = module_alloc(PAGE_SIZE);
-	if (!kip->insns) {
-		kfree(kip);
-		return NULL;
-	}
-	INIT_HLIST_NODE(&kip->hlist);
-	hlist_add_head(&kip->hlist, &kprobe_insn_pages);
-	memset(kip->slot_used, 0, INSNS_PER_PAGE);
-	kip->slot_used[0] = 1;
-	kip->nused = 1;
-	return kip->insns;
-}
-
-/**
- * free_insn_slot() - Free instruction slot obtained from get_insn_slot().
- */
-static void free_insn_slot(kprobe_opcode_t *slot)
-{
-	struct kprobe_insn_page *kip;
-	struct hlist_node *pos;
-
-	hlist_for_each(pos, &kprobe_insn_pages) {
-		kip = hlist_entry(pos, struct kprobe_insn_page, hlist);
-		if (kip->insns <= slot
-		    && slot < kip->insns + (INSNS_PER_PAGE * MAX_INSN_SIZE)) {
-			int i = (slot - kip->insns) / MAX_INSN_SIZE;
-			kip->slot_used[i] = 0;
-			kip->nused--;
-			if (kip->nused == 0) {
-				/*
-				 * Page is no longer in use.  Free it unless
-				 * it's the last one.  We keep the last one
-				 * so as not to have to set it up again the
-				 * next time somebody inserts a probe.
-				 */
-				hlist_del(&kip->hlist);
-				if (hlist_empty(&kprobe_insn_pages)) {
-					INIT_HLIST_NODE(&kip->hlist);
-					hlist_add_head(&kip->hlist,
						       &kprobe_insn_pages);
-				} else {
-					module_free(NULL, kip->insns);
-					kfree(kip);
-				}
-			}
-			return;
-		}
-	}
-}
+static struct kprobe trampoline_p = {
+	.addr = (kprobe_opcode_t *) &kretprobe_trampoline,
+	.pre_handler = trampoline_probe_handler
+};
+
+int __init arch_init(void)
+{
+	return register_kprobe(&trampoline_p);
+}
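The per-page slot accounting that the removed get_insn_slot()/free_insn_slot() pair performed can be modeled with ordinary arrays. This is a toy reconstruction under assumed sizes (MAX_INSN_SIZE and PAGE_BYTES here are placeholders), with the multi-page hlist and the module_alloc() executable mapping left out.

```c
#include <assert.h>

#define MAX_INSN_SIZE	15			/* illustrative slot size */
#define PAGE_BYTES	4096
#define INSNS_PER_PAGE	(PAGE_BYTES / MAX_INSN_SIZE)

static unsigned char page[PAGE_BYTES];		/* one "executable page" */
static char slot_used[INSNS_PER_PAGE];		/* per-slot in-use flags */
static int nused;

/* find the first free slot, as the removed get_insn_slot() did per page */
static unsigned char *get_slot(void)
{
	int i;

	if (nused >= INSNS_PER_PAGE)
		return 0;	/* the kernel would allocate a new page here */
	for (i = 0; i < INSNS_PER_PAGE; i++) {
		if (!slot_used[i]) {
			slot_used[i] = 1;
			nused++;
			return page + i * MAX_INSN_SIZE;
		}
	}
	return 0;
}

/* recover the slot index from the pointer, as free_insn_slot() did */
static void free_slot(unsigned char *slot)
{
	int i = (int)((slot - page) / MAX_INSN_SIZE);

	assert(slot_used[i]);
	slot_used[i] = 0;
	nused--;
}
```

Since allocation always scans from slot 0, a freed slot is handed out again before any fresh one, which keeps the page densely packed.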
@@ -481,6 +481,33 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long rsp,
 	return err;
 }
 
+/*
+ * This function selects if the context switch from prev to next
+ * has to tweak the TSC disable bit in cr4.
+ */
+static inline void disable_tsc(struct task_struct *prev_p,
+			       struct task_struct *next_p)
+{
+	struct thread_info *prev, *next;
+
+	/*
+	 * gcc should eliminate the ->thread_info dereference if
+	 * has_secure_computing returns 0 at compile time (SECCOMP=n).
+	 */
+	prev = prev_p->thread_info;
+	next = next_p->thread_info;
+
+	if (has_secure_computing(prev) || has_secure_computing(next)) {
+		/* slow path here */
+		if (has_secure_computing(prev) &&
+		    !has_secure_computing(next)) {
+			write_cr4(read_cr4() & ~X86_CR4_TSD);
+		} else if (!has_secure_computing(prev) &&
+			   has_secure_computing(next))
+			write_cr4(read_cr4() | X86_CR4_TSD);
+	}
+}
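The cases disable_tsc() distinguishes (toggle only when exactly one side of the switch is a seccomp task) can be checked with a host-side model. Here fake_cr4 and switch_tsc() are stand-ins for the privileged read_cr4()/write_cr4() path, while X86_CR4_TSD keeps its real value (bit 2 of CR4).

```c
#include <assert.h>

#define X86_CR4_TSD 0x04UL	/* CR4.TSD: trap RDTSC at CPL > 0 */

static unsigned long fake_cr4;	/* stand-in for the real control register */

/* same decision tree as disable_tsc() above, on a plain variable */
static void switch_tsc(int prev_seccomp, int next_seccomp)
{
	if (prev_seccomp || next_seccomp) {
		/* slow path: toggle only on a seccomp/non-seccomp edge */
		if (prev_seccomp && !next_seccomp)
			fake_cr4 &= ~X86_CR4_TSD;	/* re-enable rdtsc */
		else if (!prev_seccomp && next_seccomp)
			fake_cr4 |= X86_CR4_TSD;	/* disable rdtsc */
	}
}
```

Switching between two seccomp tasks (or two normal tasks) leaves the register untouched, so the common path stays free of cr4 writes.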
 /*
  * This special macro can be used to load a debugging register
  */
@@ -599,6 +626,8 @@ struct task_struct *__switch_to(struct task_struct *prev_p, struct task_struct *
 		}
 	}
 
+	disable_tsc(prev_p, next_p);
+
 	return prev_p;
 }
......
@@ -1806,7 +1806,8 @@ static void as_put_request(request_queue_t *q, struct request *rq)
 	rq->elevator_private = NULL;
 }
 
-static int as_set_request(request_queue_t *q, struct request *rq, int gfp_mask)
+static int as_set_request(request_queue_t *q, struct request *rq,
+			  struct bio *bio, int gfp_mask)
 {
 	struct as_data *ad = q->elevator->elevator_data;
 	struct as_rq *arq = mempool_alloc(ad->arq_pool, gfp_mask);
@@ -1827,7 +1828,7 @@ static int as_set_request(request_queue_t *q, struct request *rq, int gfp_mask)
 	return 1;
 }
 
-static int as_may_queue(request_queue_t *q, int rw)
+static int as_may_queue(request_queue_t *q, int rw, struct bio *bio)
 {
 	int ret = ELV_MQUEUE_MAY;
 	struct as_data *ad = q->elevator->elevator_data;
......
 /*
  * Disk Array driver for HP SA 5xxx and 6xxx Controllers
- * Copyright 2000, 2002 Hewlett-Packard Development Company, L.P.
+ * Copyright 2000, 2005 Hewlett-Packard Development Company, L.P.
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
@@ -54,7 +54,7 @@
 MODULE_AUTHOR("Hewlett-Packard Company");
 MODULE_DESCRIPTION("Driver for HP Controller SA5xxx SA6xxx version 2.6.6");
 MODULE_SUPPORTED_DEVICE("HP SA5i SA5i+ SA532 SA5300 SA5312 SA641 SA642 SA6400"
-			" SA6i P600 P800 E400");
+			" SA6i P600 P800 E400 E300");
 MODULE_LICENSE("GPL");
 
 #include "cciss_cmd.h"
@@ -85,8 +85,10 @@ static const struct pci_device_id cciss_pci_device_id[] = {
 		0x103C, 0x3225, 0, 0, 0},
 	{ PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSB,
 		0x103c, 0x3223, 0, 0, 0},
-	{ PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSB,
+	{ PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSC,
 		0x103c, 0x3231, 0, 0, 0},
+	{ PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSC,
+		0x103c, 0x3233, 0, 0, 0},
 	{0,}
 };
 MODULE_DEVICE_TABLE(pci, cciss_pci_device_id);
@@ -110,6 +112,7 @@ static struct board_type products[] = {
 	{ 0x3225103C, "Smart Array P600", &SA5_access},
 	{ 0x3223103C, "Smart Array P800", &SA5_access},
 	{ 0x3231103C, "Smart Array E400", &SA5_access},
+	{ 0x3233103C, "Smart Array E300", &SA5_access},
 };
 
 /* How long to wait (in milliseconds) for board to go into simple mode */
@@ -635,6 +638,7 @@ static int cciss_ioctl(struct inode *inode, struct file *filep,
 		cciss_pci_info_struct pciinfo;
 
 		if (!arg) return -EINVAL;
+		pciinfo.domain = pci_domain_nr(host->pdev->bus);
 		pciinfo.bus = host->pdev->bus->number;
 		pciinfo.dev_fn = host->pdev->devfn;
 		pciinfo.board_id = host->board_id;
@@ -787,13 +791,6 @@ static int cciss_ioctl(struct inode *inode, struct file *filep,
 		luninfo.LunID = drv->LunID;
 		luninfo.num_opens = drv->usage_count;
 		luninfo.num_parts = 0;
-		/* count partitions 1 to 15 with sizes > 0 */
-		for (i = 0; i < MAX_PART - 1; i++) {
-			if (!disk->part[i])
-				continue;
-			if (disk->part[i]->nr_sects != 0)
-				luninfo.num_parts++;
-		}
 		if (copy_to_user(argp, &luninfo,
 				sizeof(LogvolInfo_struct)))
 			return -EFAULT;
......
(This diff is collapsed.)
@@ -760,7 +760,8 @@ static void deadline_put_request(request_queue_t *q, struct request *rq)
 }
 
 static int
-deadline_set_request(request_queue_t *q, struct request *rq, int gfp_mask)
+deadline_set_request(request_queue_t *q, struct request *rq, struct bio *bio,
+		     int gfp_mask)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
 	struct deadline_rq *drq;
......
@@ -486,12 +486,13 @@ struct request *elv_former_request(request_queue_t *q, struct request *rq)
 	return NULL;
 }
 
-int elv_set_request(request_queue_t *q, struct request *rq, int gfp_mask)
+int elv_set_request(request_queue_t *q, struct request *rq, struct bio *bio,
+		    int gfp_mask)
 {
 	elevator_t *e = q->elevator;
 
 	if (e->ops->elevator_set_req_fn)
-		return e->ops->elevator_set_req_fn(q, rq, gfp_mask);
+		return e->ops->elevator_set_req_fn(q, rq, bio, gfp_mask);
 
 	rq->elevator_private = NULL;
 	return 0;
@@ -505,12 +506,12 @@ void elv_put_request(request_queue_t *q, struct request *rq)
 		e->ops->elevator_put_req_fn(q, rq);
 }
 
-int elv_may_queue(request_queue_t *q, int rw)
+int elv_may_queue(request_queue_t *q, int rw, struct bio *bio)
 {
 	elevator_t *e = q->elevator;
 
 	if (e->ops->elevator_may_queue_fn)
-		return e->ops->elevator_may_queue_fn(q, rw);
+		return e->ops->elevator_may_queue_fn(q, rw, bio);
 
 	return ELV_MQUEUE_MAY;
 }
......
@@ -276,6 +276,7 @@ static inline void rq_init(request_queue_t *q, struct request *rq)
 	rq->errors = 0;
 	rq->rq_status = RQ_ACTIVE;
 	rq->bio = rq->biotail = NULL;
+	rq->ioprio = 0;
 	rq->buffer = NULL;
 	rq->ref_count = 1;
 	rq->q = q;
@@ -1442,11 +1443,7 @@ void __generic_unplug_device(request_queue_t *q)
 	if (!blk_remove_plug(q))
 		return;
 
-	/*
-	 * was plugged, fire request_fn if queue has stuff to do
-	 */
-	if (elv_next_request(q))
-		q->request_fn(q);
+	q->request_fn(q);
 }
 EXPORT_SYMBOL(__generic_unplug_device);
@@ -1776,8 +1773,8 @@ static inline void blk_free_request(request_queue_t *q, struct request *rq)
 	mempool_free(rq, q->rq.rq_pool);
 }
 
-static inline struct request *blk_alloc_request(request_queue_t *q, int rw,
-						int gfp_mask)
+static inline struct request *
+blk_alloc_request(request_queue_t *q, int rw, struct bio *bio, int gfp_mask)
 {
 	struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
@@ -1790,7 +1787,7 @@ static inline struct request *blk_alloc_request(request_queue_t *q, int rw,
 	 */
 	rq->flags = rw;
 
-	if (!elv_set_request(q, rq, gfp_mask))
+	if (!elv_set_request(q, rq, bio, gfp_mask))
 		return rq;
 
 	mempool_free(rq, q->rq.rq_pool);
@@ -1872,7 +1869,8 @@ static void freed_request(request_queue_t *q, int rw)
 /*
  * Get a free request, queue_lock must not be held
  */
-static struct request *get_request(request_queue_t *q, int rw, int gfp_mask)
+static struct request *get_request(request_queue_t *q, int rw, struct bio *bio,
+				   int gfp_mask)
 {
 	struct request *rq = NULL;
 	struct request_list *rl = &q->rq;
@@ -1895,7 +1893,7 @@ static struct request *get_request(request_queue_t *q, int rw, int gfp_mask)
 		}
 	}
 
-	switch (elv_may_queue(q, rw)) {
+	switch (elv_may_queue(q, rw, bio)) {
 		case ELV_MQUEUE_NO:
 			goto rq_starved;
 		case ELV_MQUEUE_MAY:
@@ -1920,7 +1918,7 @@ static struct request *get_request(request_queue_t *q, int rw, int gfp_mask)
 	set_queue_congested(q, rw);
 	spin_unlock_irq(q->queue_lock);
 
-	rq = blk_alloc_request(q, rw, gfp_mask);
+	rq = blk_alloc_request(q, rw, bio, gfp_mask);
 	if (!rq) {
 		/*
 		 * Allocation failed presumably due to memory. Undo anything
@@ -1961,7 +1959,8 @@ static struct request *get_request(request_queue_t *q, int rw, int gfp_mask)
  * No available requests for this queue, unplug the device and wait for some
  * requests to become available.
  */
-static struct request *get_request_wait(request_queue_t *q, int rw)
+static struct request *get_request_wait(request_queue_t *q, int rw,
+					struct bio *bio)
 {
 	DEFINE_WAIT(wait);
 	struct request *rq;
@@ -1972,7 +1971,7 @@ static struct request *get_request_wait(request_queue_t *q, int rw)
 		prepare_to_wait_exclusive(&rl->wait[rw], &wait,
 				TASK_UNINTERRUPTIBLE);
 
-		rq = get_request(q, rw, GFP_NOIO);
+		rq = get_request(q, rw, bio, GFP_NOIO);
 
 		if (!rq) {
 			struct io_context *ioc;
@@ -2003,9 +2002,9 @@ struct request *blk_get_request(request_queue_t *q, int rw, int gfp_mask)
 	BUG_ON(rw != READ && rw != WRITE);
 
 	if (gfp_mask & __GFP_WAIT)
-		rq = get_request_wait(q, rw);
+		rq = get_request_wait(q, rw, NULL);
 	else
-		rq = get_request(q, rw, gfp_mask);
+		rq = get_request(q, rw, NULL, gfp_mask);
 
 	return rq;
 }
@@ -2333,7 +2332,6 @@ static void __blk_put_request(request_queue_t *q, struct request *req)
 		return;
 
 	req->rq_status = RQ_INACTIVE;
-	req->q = NULL;
 	req->rl = NULL;
 
 	/*
@@ -2462,6 +2460,8 @@ static int attempt_merge(request_queue_t *q, struct request *req,
 		req->rq_disk->in_flight--;
 	}
 
+	req->ioprio = ioprio_best(req->ioprio, next->ioprio);
+
 	__blk_put_request(q, next);
 	return 1;
 }
@@ -2514,11 +2514,13 @@ static int __make_request(request_queue_t *q, struct bio *bio)
 {
 	struct request *req, *freereq = NULL;
 	int el_ret, rw, nr_sectors, cur_nr_sectors, barrier, err, sync;
+	unsigned short prio;
 	sector_t sector;
 
 	sector = bio->bi_sector;
 	nr_sectors = bio_sectors(bio);
 	cur_nr_sectors = bio_cur_sectors(bio);
+	prio = bio_prio(bio);
 
 	rw = bio_data_dir(bio);
 	sync = bio_sync(bio);
@@ -2559,6 +2561,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
 			req->biotail->bi_next = bio;
 			req->biotail = bio;
 			req->nr_sectors = req->hard_nr_sectors += nr_sectors;
+			req->ioprio = ioprio_best(req->ioprio, prio);
 			drive_stat_acct(req, nr_sectors, 0);
 			if (!attempt_back_merge(q, req))
 				elv_merged_request(q, req);
@@ -2583,6 +2586,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
 			req->hard_cur_sectors = cur_nr_sectors;
 			req->sector = req->hard_sector = sector;
 			req->nr_sectors = req->hard_nr_sectors += nr_sectors;
+			req->ioprio = ioprio_best(req->ioprio, prio);
 			drive_stat_acct(req, nr_sectors, 0);
 			if (!attempt_front_merge(q, req))
 				elv_merged_request(q, req);
@@ -2610,7 +2614,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
 			freereq = NULL;
 		} else {
 			spin_unlock_irq(q->queue_lock);
-			if ((freereq = get_request(q, rw, GFP_ATOMIC)) == NULL) {
+			if ((freereq = get_request(q, rw, bio, GFP_ATOMIC)) == NULL) {
 				/*
 				 * READA bit set
 				 */
@@ -2618,7 +2622,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
 				if (bio_rw_ahead(bio))
 					goto end_io;
 
-				freereq = get_request_wait(q, rw);
+				freereq = get_request_wait(q, rw, bio);
 			}
 			goto again;
 		}
@@ -2646,6 +2650,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
 	req->buffer = bio_data(bio);	/* see ->buffer comment above */
 	req->waiting = NULL;
 	req->bio = req->biotail = bio;
+	req->ioprio = prio;
 	req->rq_disk = bio->bi_bdev->bd_disk;
 	req->start_time = jiffies;
@@ -2674,7 +2679,7 @@ static inline void blk_partition_remap(struct bio *bio)
 	if (bdev != bdev->bd_contains) {
 		struct hd_struct *p = bdev->bd_part;
 
-		switch (bio->bi_rw) {
+		switch (bio_data_dir(bio)) {
 		case READ:
 			p->read_sectors += bio_sectors(bio);
 			p->reads++;
@@ -2693,6 +2698,7 @@ void blk_finish_queue_drain(request_queue_t *q)
 {
 	struct request_list *rl = &q->rq;
 	struct request *rq;
+	int requeued = 0;
 
 	spin_lock_irq(q->queue_lock);
 	clear_bit(QUEUE_FLAG_DRAIN, &q->queue_flags);
@@ -2701,9 +2707,13 @@ void blk_finish_queue_drain(request_queue_t *q)
 		rq = list_entry_rq(q->drain_list.next);
 
 		list_del_init(&rq->queuelist);
-		__elv_add_request(q, rq, ELEVATOR_INSERT_BACK, 1);
+		elv_requeue_request(q, rq);
+		requeued++;
 	}
 
+	if (requeued)
+		q->request_fn(q);
+
 	spin_unlock_irq(q->queue_lock);
 
 	wake_up(&rl->wait[0]);
@@ -2900,7 +2910,7 @@ void submit_bio(int rw, struct bio *bio)
 	BIO_BUG_ON(!bio->bi_size);
 	BIO_BUG_ON(!bio->bi_io_vec);
 
-	bio->bi_rw = rw;
+	bio->bi_rw |= rw;
 	if (rw & WRITE)
 		mod_page_state(pgpgout, count);
 	else
@@ -3257,8 +3267,11 @@ void exit_io_context(void)
 	struct io_context *ioc;
 
 	local_irq_save(flags);
+	task_lock(current);
 	ioc = current->io_context;
 	current->io_context = NULL;
+	ioc->task = NULL;
+	task_unlock(current);
 	local_irq_restore(flags);
 
 	if (ioc->aic && ioc->aic->exit)
@@ -3293,12 +3306,12 @@ struct io_context *get_io_context(int gfp_flags)
 	ret = kmem_cache_alloc(iocontext_cachep, gfp_flags);
 	if (ret) {
 		atomic_set(&ret->refcount, 1);
-		ret->pid = tsk->pid;
+		ret->task = current;
+		ret->set_ioprio = NULL;
 		ret->last_waited = jiffies; /* doesn't matter... */
 		ret->nr_batch_requests = 0; /* because this is 0 */
 		ret->aic = NULL;
 		ret->cic = NULL;
-		spin_lock_init(&ret->lock);
 
 		local_irq_save(flags);
......
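The ioprio_best() calls added in the merge paths above pick the stronger of two request priorities. The sketch below shows the comparison using the kernel's class-in-top-bits ioprio encoding (IOPRIO_CLASS_SHIFT is 13); treat it as illustrative, since the canonical helper lives in fs/ioprio.c and additionally validates its inputs.

```c
#include <assert.h>

#define IOPRIO_CLASS_SHIFT	13
#define IOPRIO_PRIO_VALUE(class, data)	(((class) << IOPRIO_CLASS_SHIFT) | (data))
#define IOPRIO_PRIO_CLASS(prio)		((prio) >> IOPRIO_CLASS_SHIFT)

enum { IOPRIO_CLASS_NONE, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE };

static unsigned short ioprio_best(unsigned short aprio, unsigned short bprio)
{
	unsigned short aclass = IOPRIO_PRIO_CLASS(aprio);
	unsigned short bclass = IOPRIO_PRIO_CLASS(bprio);

	/* unprioritized requests compare as best-effort */
	if (aclass == IOPRIO_CLASS_NONE)
		aclass = IOPRIO_CLASS_BE;
	if (bclass == IOPRIO_CLASS_NONE)
		bclass = IOPRIO_CLASS_BE;

	if (aclass == bclass)
		return aprio < bprio ? aprio : bprio;	/* lower data = higher prio */
	return aclass < bclass ? aprio : bprio;		/* RT beats BE beats IDLE */
}
```

Merging therefore never demotes a request: the combined request keeps the more urgent of the two priorities, so a low-priority bio merged into a real-time request rides along at real-time priority.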
@@ -253,7 +253,7 @@ static int floppy_revalidate(struct gendisk *disk);
 static int swim3_add_device(struct device_node *swims);
 int	swim3_init(void);
 
-#ifndef CONFIG_PMAC_PBOOK
+#ifndef CONFIG_PMAC_MEDIABAY
 #define check_media_bay(which, what)	1
 #endif
@@ -297,9 +297,11 @@ static void do_fd_request(request_queue_t * q)
 	int i;
 
 	for(i=0;i<floppy_count;i++)
 	{
+#ifdef CONFIG_PMAC_MEDIABAY
 		if (floppy_states[i].media_bay &&
 			check_media_bay(floppy_states[i].media_bay, MB_FD))
 			continue;
+#endif /* CONFIG_PMAC_MEDIABAY */
 		start_request(&floppy_states[i]);
 	}
 	sti();
@@ -856,8 +858,10 @@ static int floppy_ioctl(struct inode *inode, struct file *filp,
 	if ((cmd & 0x80) && !capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
+#ifdef CONFIG_PMAC_MEDIABAY
 	if (fs->media_bay && check_media_bay(fs->media_bay, MB_FD))
 		return -ENXIO;
+#endif
 
 	switch (cmd) {
 	case FDEJECT:
@@ -881,8 +885,10 @@ static int floppy_open(struct inode *inode, struct file *filp)
 	int n, err = 0;
 
 	if (fs->ref_count == 0) {
+#ifdef CONFIG_PMAC_MEDIABAY
 		if (fs->media_bay && check_media_bay(fs->media_bay, MB_FD))
 			return -ENXIO;
+#endif
 		out_8(&sw->setup, S_IBM_DRIVE | S_FCLK_DIV2);
 		out_8(&sw->control_bic, 0xff);
 		out_8(&sw->mode, 0x95);
@@ -967,8 +973,10 @@ static int floppy_revalidate(struct gendisk *disk)
 	struct swim3 __iomem *sw;
 	int ret, n;
 
+#ifdef CONFIG_PMAC_MEDIABAY
 	if (fs->media_bay && check_media_bay(fs->media_bay, MB_FD))
 		return -ENXIO;
+#endif
 
 	sw = fs->swim3;
 	grab_drive(fs, revalidating, 0);
......
@@ -26,6 +26,7 @@
 #include <linux/delay.h>
 #include <linux/time.h>
 #include <linux/hdreg.h>
+#include <linux/dma-mapping.h>
 
 #include <asm/io.h>
 #include <asm/semaphore.h>
 #include <asm/uaccess.h>
@@ -1582,9 +1583,9 @@ static int carm_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_out;
 
 #if IF_64BIT_DMA_IS_POSSIBLE /* grrrr... */
-	rc = pci_set_dma_mask(pdev, 0xffffffffffffffffULL);
+	rc = pci_set_dma_mask(pdev, DMA_64BIT_MASK);
 	if (!rc) {
-		rc = pci_set_consistent_dma_mask(pdev, 0xffffffffffffffffULL);
+		rc = pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK);
 		if (rc) {
 			printk(KERN_ERR DRV_NAME "(%s): consistent DMA mask failure\n",
 				pci_name(pdev));
@@ -1593,7 +1594,7 @@ static int carm_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
 		pci_dac = 1;
 	} else {
 #endif
-		rc = pci_set_dma_mask(pdev, 0xffffffffULL);
+		rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
 		if (rc) {
 			printk(KERN_ERR DRV_NAME "(%s): DMA mask failure\n",
 				pci_name(pdev));
......
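The DMA_64BIT_MASK/DMA_32BIT_MASK names substituted for the literal masks above are plain bit-width constants; the values shown here match what <linux/dma-mapping.h> defined at the time, reproduced for illustration.

```c
#include <assert.h>

/* width masks for pci_set_dma_mask(): all 64 bits vs. the low 32 bits */
#define DMA_64BIT_MASK	0xffffffffffffffffULL
#define DMA_32BIT_MASK	0x00000000ffffffffULL
```

Using the named constants instead of raw hex makes the fallback path (64-bit DAC addressing, then 32-bit) self-documenting.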
@@ -1089,6 +1089,14 @@ static int bluecard_event(event_t event, int priority, event_callback_args_t *ar
 	return 0;
 }
 
+static struct pcmcia_device_id bluecard_ids[] = {
+	PCMCIA_DEVICE_PROD_ID12("BlueCard", "LSE041", 0xbaf16fbf, 0x657cc15e),
+	PCMCIA_DEVICE_PROD_ID12("BTCFCARD", "LSE139", 0xe3987764, 0x2524b59c),
+	PCMCIA_DEVICE_PROD_ID12("WSS", "LSE039", 0x0a0736ec, 0x24e6dfab),
+	PCMCIA_DEVICE_NULL
+};
+MODULE_DEVICE_TABLE(pcmcia, bluecard_ids);
+
 static struct pcmcia_driver bluecard_driver = {
 	.owner		= THIS_MODULE,
 	.drv		= {
@@ -1096,6 +1104,7 @@ static struct pcmcia_driver bluecard_driver = {
 	},
 	.attach		= bluecard_attach,
 	.detach		= bluecard_detach,
+	.id_table	= bluecard_ids,
 };
 
 static int __init init_bluecard_cs(void)
......
@@ -935,6 +935,12 @@ static int bt3c_event(event_t event, int priority, event_callback_args_t *args)
 	return 0;
 }
 
+static struct pcmcia_device_id bt3c_ids[] = {
+	PCMCIA_DEVICE_PROD_ID13("3COM", "Bluetooth PC Card", 0xefce0a31, 0xd4ce9b02),
+	PCMCIA_DEVICE_NULL
+};
+MODULE_DEVICE_TABLE(pcmcia, bt3c_ids);
+
 static struct pcmcia_driver bt3c_driver = {
 	.owner		= THIS_MODULE,
 	.drv		= {
@@ -942,6 +948,7 @@ static struct pcmcia_driver bt3c_driver = {
 	},
 	.attach		= bt3c_attach,
 	.detach		= bt3c_detach,
+	.id_table	= bt3c_ids,
 };
 
 static int __init init_bt3c_cs(void)
......
@@ -855,6 +855,12 @@ static int btuart_event(event_t event, int priority, event_callback_args_t *args
 	return 0;
 }
 
+static struct pcmcia_device_id btuart_ids[] = {
+	/* don't use this driver. Use serial_cs + hci_uart instead */
+	PCMCIA_DEVICE_NULL
+};
+MODULE_DEVICE_TABLE(pcmcia, btuart_ids);
+
 static struct pcmcia_driver btuart_driver = {
 	.owner		= THIS_MODULE,
 	.drv		= {
@@ -862,6 +868,7 @@ static struct pcmcia_driver btuart_driver = {
 	},
 	.attach		= btuart_attach,
 	.detach		= btuart_detach,
+	.id_table	= btuart_ids,
 };
 
 static int __init init_btuart_cs(void)
......
@@ -807,6 +807,13 @@ static int dtl1_event(event_t event, int priority, event_callback_args_t *args)
return 0;
}
+static struct pcmcia_device_id dtl1_ids[] = {
+PCMCIA_DEVICE_PROD_ID12("Nokia Mobile Phones", "DTL-1", 0xe1bfdd64, 0xe168480d),
+PCMCIA_DEVICE_PROD_ID12("Socket", "CF", 0xb38bcc2e, 0x44ebf863),
+PCMCIA_DEVICE_NULL
+};
+MODULE_DEVICE_TABLE(pcmcia, dtl1_ids);
static struct pcmcia_driver dtl1_driver = {
.owner = THIS_MODULE,
.drv = {
@@ -814,6 +821,7 @@ static struct pcmcia_driver dtl1_driver = {
},
.attach = dtl1_attach,
.detach = dtl1_detach,
+.id_table = dtl1_ids,
};
static int __init init_dtl1_cs(void)
...
@@ -308,9 +308,6 @@ static int __init misc_init(void)
#endif
#ifdef CONFIG_BVME6000
rtc_DP8570A_init();
-#endif
-#ifdef CONFIG_PMAC_PBOOK
-pmu_device_init();
#endif
if (register_chrdev(MISC_MAJOR,"misc",&misc_fops)) {
printk("unable to get major %d for misc devices\n",
...
@@ -581,7 +581,7 @@ static dev_link_t *mgslpc_attach(void)
/* Interrupt setup */
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE;
-link->irq.IRQInfo1 = IRQ_INFO2_VALID | IRQ_LEVEL_ID;
+link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Handler = NULL;
link->conf.Attributes = 0;
@@ -3081,6 +3081,12 @@ void mgslpc_remove_device(MGSLPC_INFO *remove_info)
}
}
+static struct pcmcia_device_id mgslpc_ids[] = {
+PCMCIA_DEVICE_MANF_CARD(0x02c5, 0x0050),
+PCMCIA_DEVICE_NULL
+};
+MODULE_DEVICE_TABLE(pcmcia, mgslpc_ids);
static struct pcmcia_driver mgslpc_driver = {
.owner = THIS_MODULE,
.drv = {
@@ -3088,6 +3094,7 @@ static struct pcmcia_driver mgslpc_driver = {
},
.attach = mgslpc_attach,
.detach = mgslpc_detach,
+.id_table = mgslpc_ids,
};
static struct tty_operations mgslpc_ops = {
...
@@ -606,6 +606,12 @@ config BLK_DEV_IT8172
<http://www.ite.com.tw/ia/brief_it8172bsp.htm>; picture of the
board at <http://www.mvista.com/partners/semiconductor/ite.html>.
+config BLK_DEV_IT821X
+tristate "IT821X IDE support"
+help
+This driver adds support for the ITE 8211 IDE controller and the
+IT 8212 IDE RAID controller in both RAID and pass-through mode.
config BLK_DEV_NS87415
tristate "NS87415 chipset support"
help
...
@@ -119,6 +119,10 @@ static int lba_capacity_is_ok (struct hd_driveid *id)
{
unsigned long lba_sects, chs_sects, head, tail;
+/* No non-LBA info .. so valid! */
+if (id->cyls == 0)
+return 1;
/*
* The ATA spec tells large drives to return
* C/H/S = 16383/16/63 independent of their size.
...
@@ -132,7 +132,6 @@ static const struct drive_list_entry drive_blacklist [] = {
{ "SAMSUNG CD-ROM SC-148C", "ALL" },
{ "SAMSUNG CD-ROM SC", "ALL" },
{ "SanDisk SDP3B-64" , "ALL" },
-{ "SAMSUNG CD-ROM SN-124", "ALL" },
{ "ATAPI CD-ROM DRIVE 40X MAXIMUM", "ALL" },
{ "_NEC DV5800A", "ALL" },
{ NULL , NULL }
...
@@ -1181,7 +1181,8 @@ static ide_startstop_t do_reset1 (ide_drive_t *drive, int do_not_try_atapi)
pre_reset(drive);
SELECT_DRIVE(drive);
udelay (20);
-hwif->OUTB(WIN_SRST, IDE_COMMAND_REG);
+hwif->OUTBSYNC(drive, WIN_SRST, IDE_COMMAND_REG);
+ndelay(400);
hwgroup->poll_timeout = jiffies + WAIT_WORSTCASE;
hwgroup->polling = 1;
__ide_set_handler(drive, &atapi_reset_pollfunc, HZ/20, NULL);
...
@@ -457,6 +457,40 @@ int ide_event(event_t event, int priority,
return 0;
} /* ide_event */
+static struct pcmcia_device_id ide_ids[] = {
+PCMCIA_DEVICE_FUNC_ID(4),
+PCMCIA_DEVICE_MANF_CARD(0x0032, 0x0704),
+PCMCIA_DEVICE_MANF_CARD(0x00a4, 0x002d),
+PCMCIA_DEVICE_MANF_CARD(0x2080, 0x0001),
+PCMCIA_DEVICE_MANF_CARD(0x0045, 0x0401),
+PCMCIA_DEVICE_PROD_ID123("Caravelle", "PSC-IDE ", "PSC000", 0x8c36137c, 0xd0693ab8, 0x2768a9f0),
+PCMCIA_DEVICE_PROD_ID123("CDROM", "IDE", "MCD-601p", 0x1b9179ca, 0xede88951, 0x0d902f74),
+PCMCIA_DEVICE_PROD_ID123("PCMCIA", "IDE CARD", "F1", 0x281f1c5d, 0x1907960c, 0xf7fde8b9),
+PCMCIA_DEVICE_PROD_ID12("ARGOSY", "CD-ROM", 0x78f308dc, 0x66536591),
+PCMCIA_DEVICE_PROD_ID12("ARGOSY", "PnPIDE", 0x78f308dc, 0x0c694728),
+PCMCIA_DEVICE_PROD_ID12("CNF CD-M", "CD-ROM", 0x7d93b852, 0x66536591),
+PCMCIA_DEVICE_PROD_ID12("Creative Technology Ltd.", "PCMCIA CD-ROM Interface Card", 0xff8c8a45, 0xfe8020c4),
+PCMCIA_DEVICE_PROD_ID12("Digital Equipment Corporation.", "Digital Mobile Media CD-ROM", 0x17692a66, 0xef1dcbde),
+PCMCIA_DEVICE_PROD_ID12("EXP", "CD", 0x6f58c983, 0xaae5994f),
+PCMCIA_DEVICE_PROD_ID12("EXP ", "CD-ROM", 0x0a5c52fd, 0x66536591),
+PCMCIA_DEVICE_PROD_ID12("EXP ", "PnPIDE", 0x0a5c52fd, 0x0c694728),
+PCMCIA_DEVICE_PROD_ID12("FREECOM", "PCCARD-IDE", 0x5714cbf7, 0x48e0ab8e),
+PCMCIA_DEVICE_PROD_ID12("IBM", "IBM17JSSFP20", 0xb569a6e5, 0xf2508753),
+PCMCIA_DEVICE_PROD_ID12("IO DATA", "CBIDE2 ", 0x547e66dc, 0x8671043b),
+PCMCIA_DEVICE_PROD_ID12("IO DATA", "PCIDE", 0x547e66dc, 0x5c5ab149),
+PCMCIA_DEVICE_PROD_ID12("IO DATA", "PCIDEII", 0x547e66dc, 0xb3662674),
+PCMCIA_DEVICE_PROD_ID12("LOOKMEET", "CBIDE2 ", 0xe37be2b5, 0x8671043b),
+PCMCIA_DEVICE_PROD_ID12(" ", "NinjaATA-", 0x3b6e20c8, 0xebe0bd79),
+PCMCIA_DEVICE_PROD_ID12("PCMCIA", "CD-ROM", 0x281f1c5d, 0x66536591),
+PCMCIA_DEVICE_PROD_ID12("PCMCIA", "PnPIDE", 0x281f1c5d, 0x0c694728),
+PCMCIA_DEVICE_PROD_ID12("SHUTTLE TECHNOLOGY LTD.", "PCCARD-IDE/ATAPI Adapter", 0x4a3f0ba0, 0x322560e1),
+PCMCIA_DEVICE_PROD_ID12("TOSHIBA", "MK2001MPL", 0xb4585a1a, 0x3489e003),
+PCMCIA_DEVICE_PROD_ID12("WIT", "IDE16", 0x244e5994, 0x3e232852),
+PCMCIA_DEVICE_PROD_ID1("STI Flash", 0xe4a13209),
+PCMCIA_DEVICE_NULL,
+};
+MODULE_DEVICE_TABLE(pcmcia, ide_ids);
static struct pcmcia_driver ide_cs_driver = {
.owner = THIS_MODULE,
.drv = {
@@ -464,6 +498,7 @@ static struct pcmcia_driver ide_cs_driver = {
},
.attach = ide_attach,
.detach = ide_detach,
+.id_table = ide_ids,
};
static int __init init_ide_cs(void)
...
@@ -12,6 +12,7 @@ obj-$(CONFIG_BLK_DEV_HPT34X) += hpt34x.o
obj-$(CONFIG_BLK_DEV_HPT366) += hpt366.o
#obj-$(CONFIG_BLK_DEV_HPT37X) += hpt37x.o
obj-$(CONFIG_BLK_DEV_IT8172) += it8172.o
+obj-$(CONFIG_BLK_DEV_IT821X) += it821x.o
obj-$(CONFIG_BLK_DEV_NS87415) += ns87415.o
obj-$(CONFIG_BLK_DEV_OPTI621) += opti621.o
obj-$(CONFIG_BLK_DEV_PDC202XX_OLD) += pdc202xx_old.o
...
@@ -39,6 +39,17 @@
#include <asm/io.h>
+static int ide_generic_all; /* Set to claim all devices */
+static int __init ide_generic_all_on(char *unused)
+{
+ide_generic_all = 1;
+printk(KERN_INFO "IDE generic will claim all unknown PCI IDE storage controllers.\n");
+return 1;
+}
+__setup("all-generic-ide", ide_generic_all_on);
static void __devinit init_hwif_generic (ide_hwif_t *hwif)
{
switch(hwif->pci_dev->device) {
@@ -78,79 +89,85 @@ static void __devinit init_hwif_generic (ide_hwif_t *hwif)
static ide_pci_device_t generic_chipsets[] __devinitdata = {
{ /* 0 */
+.name = "Unknown",
+.init_hwif = init_hwif_generic,
+.channels = 2,
+.autodma = AUTODMA,
+.bootable = ON_BOARD,
+},{ /* 1 */
.name = "NS87410",
.init_hwif = init_hwif_generic,
.channels = 2,
.autodma = AUTODMA,
.enablebits = {{0x43,0x08,0x08}, {0x47,0x08,0x08}},
.bootable = ON_BOARD,
-},{ /* 1 */
+},{ /* 2 */
.name = "SAMURAI",
.init_hwif = init_hwif_generic,
.channels = 2,
.autodma = AUTODMA,
.bootable = ON_BOARD,
-},{ /* 2 */
+},{ /* 3 */
.name = "HT6565",
.init_hwif = init_hwif_generic,
.channels = 2,
.autodma = AUTODMA,
.bootable = ON_BOARD,
-},{ /* 3 */
+},{ /* 4 */
.name = "UM8673F",
.init_hwif = init_hwif_generic,
.channels = 2,
.autodma = NODMA,
.bootable = ON_BOARD,
-},{ /* 4 */
+},{ /* 5 */
.name = "UM8886A",
.init_hwif = init_hwif_generic,
.channels = 2,
.autodma = NODMA,
.bootable = ON_BOARD,
-},{ /* 5 */
+},{ /* 6 */
.name = "UM8886BF",
.init_hwif = init_hwif_generic,
.channels = 2,
.autodma = NODMA,
.bootable = ON_BOARD,
-},{ /* 6 */
+},{ /* 7 */
.name = "HINT_IDE",
.init_hwif = init_hwif_generic,
.channels = 2,
.autodma = AUTODMA,
.bootable = ON_BOARD,
-},{ /* 7 */
+},{ /* 8 */
.name = "VIA_IDE",
.init_hwif = init_hwif_generic,
.channels = 2,
.autodma = NOAUTODMA,
.bootable = ON_BOARD,
-},{ /* 8 */
+},{ /* 9 */
.name = "OPTI621V",
.init_hwif = init_hwif_generic,
.channels = 2,
.autodma = NOAUTODMA,
.bootable = ON_BOARD,
-},{ /* 9 */
+},{ /* 10 */
.name = "VIA8237SATA",
.init_hwif = init_hwif_generic,
.channels = 2,
.autodma = AUTODMA,
.bootable = OFF_BOARD,
-},{ /* 10 */
+},{ /* 11 */
.name = "Piccolo0102",
.init_hwif = init_hwif_generic,
.channels = 2,
.autodma = NOAUTODMA,
.bootable = ON_BOARD,
-},{ /* 11 */
+},{ /* 12 */
.name = "Piccolo0103",
.init_hwif = init_hwif_generic,
.channels = 2,
.autodma = NOAUTODMA,
.bootable = ON_BOARD,
-},{ /* 12 */
+},{ /* 13 */
.name = "Piccolo0105",
.init_hwif = init_hwif_generic,
.channels = 2,
@@ -174,6 +191,10 @@ static int __devinit generic_init_one(struct pci_dev *dev, const struct pci_devi
u16 command;
int ret = -ENODEV;
+/* Don't use the generic entry unless instructed to do so */
+if (id->driver_data == 0 && ide_generic_all == 0)
+goto out;
if (dev->vendor == PCI_VENDOR_ID_UMC &&
dev->device == PCI_DEVICE_ID_UMC_UM8886A &&
(!(PCI_FUNC(dev->devfn) & 1)))
@@ -195,21 +216,23 @@ static int __devinit generic_init_one(struct pci_dev *dev, const struct pci_devi
}
static struct pci_device_id generic_pci_tbl[] = {
-{ PCI_VENDOR_ID_NS, PCI_DEVICE_ID_NS_87410, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
+{ PCI_VENDOR_ID_NS, PCI_DEVICE_ID_NS_87410, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1},
-{ PCI_VENDOR_ID_PCTECH, PCI_DEVICE_ID_PCTECH_SAMURAI_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1},
+{ PCI_VENDOR_ID_PCTECH, PCI_DEVICE_ID_PCTECH_SAMURAI_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 2},
-{ PCI_VENDOR_ID_HOLTEK, PCI_DEVICE_ID_HOLTEK_6565, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 2},
+{ PCI_VENDOR_ID_HOLTEK, PCI_DEVICE_ID_HOLTEK_6565, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 3},
-{ PCI_VENDOR_ID_UMC, PCI_DEVICE_ID_UMC_UM8673F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 3},
+{ PCI_VENDOR_ID_UMC, PCI_DEVICE_ID_UMC_UM8673F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 4},
-{ PCI_VENDOR_ID_UMC, PCI_DEVICE_ID_UMC_UM8886A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 4},
+{ PCI_VENDOR_ID_UMC, PCI_DEVICE_ID_UMC_UM8886A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 5},
-{ PCI_VENDOR_ID_UMC, PCI_DEVICE_ID_UMC_UM8886BF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 5},
+{ PCI_VENDOR_ID_UMC, PCI_DEVICE_ID_UMC_UM8886BF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 6},
-{ PCI_VENDOR_ID_HINT, PCI_DEVICE_ID_HINT_VXPROII_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 6},
+{ PCI_VENDOR_ID_HINT, PCI_DEVICE_ID_HINT_VXPROII_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 7},
-{ PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C561, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 7},
+{ PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C561, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 8},
-{ PCI_VENDOR_ID_OPTI, PCI_DEVICE_ID_OPTI_82C558, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 8},
+{ PCI_VENDOR_ID_OPTI, PCI_DEVICE_ID_OPTI_82C558, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 9},
#ifdef CONFIG_BLK_DEV_IDE_SATA
-{ PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237_SATA, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 9},
+{ PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237_SATA, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 10},
#endif
-{ PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 10},
+{ PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 11},
-{ PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 11},
+{ PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 12},
-{ PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 12},
+{ PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 13},
+/* Must come last. If you add entries adjust this table appropriately and the init_one code */
+{ PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_STORAGE_IDE << 8, 0xFFFFFF00UL, 0},
{ 0, },
};
MODULE_DEVICE_TABLE(pci, generic_pci_tbl);
...
(This diff has been collapsed.)
(This diff has been collapsed.)
@@ -442,7 +442,7 @@ static unsigned int __devinit init_chipset_svwks (struct pci_dev *dev, const cha
return (dev->irq) ? dev->irq : 0;
}
-static unsigned int __init ata66_svwks_svwks (ide_hwif_t *hwif)
+static unsigned int __devinit ata66_svwks_svwks (ide_hwif_t *hwif)
{
return 1;
}
@@ -454,7 +454,7 @@ static unsigned int __init ata66_svwks_svwks (ide_hwif_t *hwif)
* Bit 14 clear = primary IDE channel does not have 80-pin cable.
* Bit 14 set = primary IDE channel has 80-pin cable.
*/
-static unsigned int __init ata66_svwks_dell (ide_hwif_t *hwif)
+static unsigned int __devinit ata66_svwks_dell (ide_hwif_t *hwif)
{
struct pci_dev *dev = hwif->pci_dev;
if (dev->subsystem_vendor == PCI_VENDOR_ID_DELL &&
@@ -472,7 +472,7 @@ static unsigned int __init ata66_svwks_dell (ide_hwif_t *hwif)
*
* WARNING: this only works on Alpine hardware!
*/
-static unsigned int __init ata66_svwks_cobalt (ide_hwif_t *hwif)
+static unsigned int __devinit ata66_svwks_cobalt (ide_hwif_t *hwif)
{
struct pci_dev *dev = hwif->pci_dev;
if (dev->subsystem_vendor == PCI_VENDOR_ID_SUN &&
@@ -483,7 +483,7 @@ static unsigned int __init ata66_svwks_cobalt (ide_hwif_t *hwif)
return 0;
}
-static unsigned int __init ata66_svwks (ide_hwif_t *hwif)
+static unsigned int __devinit ata66_svwks (ide_hwif_t *hwif)
{
struct pci_dev *dev = hwif->pci_dev;
@@ -573,7 +573,7 @@ static int __devinit init_setup_svwks (struct pci_dev *dev, ide_pci_device_t *d)
return ide_setup_pci_device(dev, d);
}
-static int __init init_setup_csb6 (struct pci_dev *dev, ide_pci_device_t *d)
+static int __devinit init_setup_csb6 (struct pci_dev *dev, ide_pci_device_t *d)
{
if (!(PCI_FUNC(dev->devfn) & 1)) {
d->bootable = NEVER_BOARD;
...
@@ -1324,9 +1324,9 @@ pmac_ide_setup_device(pmac_ide_hwif_t *pmif, ide_hwif_t *hwif)
/* XXX FIXME: Media bay stuff need re-organizing */
if (np->parent && np->parent->name
&& strcasecmp(np->parent->name, "media-bay") == 0) {
-#ifdef CONFIG_PMAC_PBOOK
+#ifdef CONFIG_PMAC_MEDIABAY
media_bay_set_ide_infos(np->parent, pmif->regbase, pmif->irq, hwif->index);
-#endif /* CONFIG_PMAC_PBOOK */
+#endif /* CONFIG_PMAC_MEDIABAY */
pmif->mediabay = 1;
if (!bidp)
pmif->aapl_bus_id = 1;
@@ -1382,10 +1382,10 @@ pmac_ide_setup_device(pmac_ide_hwif_t *pmif, ide_hwif_t *hwif)
hwif->index, model_name[pmif->kind], pmif->aapl_bus_id,
pmif->mediabay ? " (mediabay)" : "", hwif->irq);
-#ifdef CONFIG_PMAC_PBOOK
+#ifdef CONFIG_PMAC_MEDIABAY
if (pmif->mediabay && check_media_bay_by_base(pmif->regbase, MB_CD) == 0)
hwif->noprobe = 0;
-#endif /* CONFIG_PMAC_PBOOK */
+#endif /* CONFIG_PMAC_MEDIABAY */
hwif->sg_max_nents = MAX_DCMDS;
...
@@ -3538,8 +3538,8 @@ static void ohci1394_pci_remove(struct pci_dev *pdev)
static int ohci1394_pci_resume (struct pci_dev *pdev)
{
-#ifdef CONFIG_PMAC_PBOOK
-{
+#ifdef CONFIG_PPC_PMAC
+if (_machine == _MACH_Pmac) {
struct device_node *of_node;
/* Re-enable 1394 */
@@ -3547,7 +3547,7 @@ static int ohci1394_pci_resume (struct pci_dev *pdev)
if (of_node)
pmac_call_feature (PMAC_FTR_1394_ENABLE, of_node, 0, 1);
}
-#endif
+#endif /* CONFIG_PPC_PMAC */
pci_enable_device(pdev);
@@ -3557,8 +3557,8 @@ static int ohci1394_pci_resume (struct pci_dev *pdev)
static int ohci1394_pci_suspend (struct pci_dev *pdev, pm_message_t state)
{
-#ifdef CONFIG_PMAC_PBOOK
-{
+#ifdef CONFIG_PPC_PMAC
+if (_machine == _MACH_Pmac) {
struct device_node *of_node;
/* Disable 1394 */
...
@@ -96,7 +96,7 @@ void ib_pack(const struct ib_field *desc,
else
val = 0;
-mask = cpu_to_be64(((1ull << desc[i].size_bits) - 1) << shift);
+mask = cpu_to_be64((~0ull >> (64 - desc[i].size_bits)) << shift);
addr = (__be64 *) ((__be32 *) buf + desc[i].offset_words);
*addr = (*addr & ~mask) | (cpu_to_be64(val) & mask);
} else {
@@ -176,7 +176,7 @@ void ib_unpack(const struct ib_field *desc,
__be64 *addr;
shift = 64 - desc[i].offset_bits - desc[i].size_bits;
-mask = ((1ull << desc[i].size_bits) - 1) << shift;
+mask = (~0ull >> (64 - desc[i].size_bits)) << shift;
addr = (__be64 *) buf + desc[i].offset_words;
val = (be64_to_cpup(addr) & mask) >> shift;
value_write(desc[i].struct_offset_bytes,
...
@@ -507,7 +507,13 @@ static int send_mad(struct ib_sa_query *query, int timeout_ms)
spin_unlock_irqrestore(&idr_lock, flags);
}
-return ret;
+/*
+* It's not safe to dereference query any more, because the
+* send may already have completed and freed the query in
+* another context. So use wr.wr_id, which has a copy of the
+* query's id.
+*/
+return ret ? ret : wr.wr_id;
}
static void ib_sa_path_rec_callback(struct ib_sa_query *sa_query,
@@ -598,14 +604,15 @@ int ib_sa_path_rec_get(struct ib_device *device, u8 port_num,
rec, query->sa_query.mad->data);
*sa_query = &query->sa_query;
ret = send_mad(&query->sa_query, timeout_ms);
-if (ret) {
+if (ret < 0) {
*sa_query = NULL;
kfree(query->sa_query.mad);
kfree(query);
}
-return ret ? ret : query->sa_query.id;
+return ret;
}
EXPORT_SYMBOL(ib_sa_path_rec_get);
@@ -674,14 +681,15 @@ int ib_sa_mcmember_rec_query(struct ib_device *device, u8 port_num,
rec, query->sa_query.mad->data);
*sa_query = &query->sa_query;
ret = send_mad(&query->sa_query, timeout_ms);
-if (ret) {
+if (ret < 0) {
*sa_query = NULL;
kfree(query->sa_query.mad);
kfree(query);
}
-return ret ? ret : query->sa_query.id;
+return ret;
}
EXPORT_SYMBOL(ib_sa_mcmember_rec_query);
...
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
+* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
@@ -37,8 +37,7 @@
#include <ib_verbs.h>
-#define MTHCA_CMD_MAILBOX_ALIGN 16UL
-#define MTHCA_CMD_MAILBOX_EXTRA (MTHCA_CMD_MAILBOX_ALIGN - 1)
+#define MTHCA_MAILBOX_SIZE 4096
enum {
/* command completed successfully: */
@@ -112,6 +111,11 @@ enum {
DEV_LIM_FLAG_UD_MULTI = 1 << 21,
};
+struct mthca_mailbox {
+dma_addr_t dma;
+void *buf;
+};
struct mthca_dev_lim {
int max_srq_sz;
int max_qp_sz;
@@ -235,11 +239,17 @@ struct mthca_set_ib_param {
u32 cap_mask;
};
+int mthca_cmd_init(struct mthca_dev *dev);
+void mthca_cmd_cleanup(struct mthca_dev *dev);
int mthca_cmd_use_events(struct mthca_dev *dev);
void mthca_cmd_use_polling(struct mthca_dev *dev);
void mthca_cmd_event(struct mthca_dev *dev, u16 token,
u8 status, u64 out_param);
+struct mthca_mailbox *mthca_alloc_mailbox(struct mthca_dev *dev,
+unsigned int gfp_mask);
+void mthca_free_mailbox(struct mthca_dev *dev, struct mthca_mailbox *mailbox);
int mthca_SYS_EN(struct mthca_dev *dev, u8 *status);
int mthca_SYS_DIS(struct mthca_dev *dev, u8 *status);
int mthca_MAP_FA(struct mthca_dev *dev, struct mthca_icm *icm, u8 *status);
@@ -270,41 +280,39 @@ int mthca_MAP_ICM_AUX(struct mthca_dev *dev, struct mthca_icm *icm, u8 *status);
int mthca_UNMAP_ICM_AUX(struct mthca_dev *dev, u8 *status);
int mthca_SET_ICM_SIZE(struct mthca_dev *dev, u64 icm_size, u64 *aux_pages,
u8 *status);
-int mthca_SW2HW_MPT(struct mthca_dev *dev, void *mpt_entry,
+int mthca_SW2HW_MPT(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
int mpt_index, u8 *status);
-int mthca_HW2SW_MPT(struct mthca_dev *dev, void *mpt_entry,
+int mthca_HW2SW_MPT(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
int mpt_index, u8 *status);
-int mthca_WRITE_MTT(struct mthca_dev *dev, u64 *mtt_entry,
+int mthca_WRITE_MTT(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
int num_mtt, u8 *status);
int mthca_SYNC_TPT(struct mthca_dev *dev, u8 *status);
int mthca_MAP_EQ(struct mthca_dev *dev, u64 event_mask, int unmap,
int eq_num, u8 *status);
-int mthca_SW2HW_EQ(struct mthca_dev *dev, void *eq_context,
+int mthca_SW2HW_EQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
int eq_num, u8 *status);
-int mthca_HW2SW_EQ(struct mthca_dev *dev, void *eq_context,
+int mthca_HW2SW_EQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
int eq_num, u8 *status);
-int mthca_SW2HW_CQ(struct mthca_dev *dev, void *cq_context,
+int mthca_SW2HW_CQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
int cq_num, u8 *status);
-int mthca_HW2SW_CQ(struct mthca_dev *dev, void *cq_context,
+int mthca_HW2SW_CQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
int cq_num, u8 *status);
int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num,
-int is_ee, void *qp_context, u32 optmask,
+int is_ee, struct mthca_mailbox *mailbox, u32 optmask,
u8 *status);
int mthca_QUERY_QP(struct mthca_dev *dev, u32 num, int is_ee,
-void *qp_context, u8 *status);
+struct mthca_mailbox *mailbox, u8 *status);
int mthca_CONF_SPECIAL_QP(struct mthca_dev *dev, int type, u32 qpn,
u8 *status);
int mthca_MAD_IFC(struct mthca_dev *dev, int ignore_mkey, int ignore_bkey,
-int port, struct ib_wc* in_wc, struct ib_grh* in_grh,
+int port, struct ib_wc *in_wc, struct ib_grh *in_grh,
void *in_mad, void *response_mad, u8 *status);
-int mthca_READ_MGM(struct mthca_dev *dev, int index, void *mgm,
-u8 *status);
+int mthca_READ_MGM(struct mthca_dev *dev, int index,
+struct mthca_mailbox *mailbox, u8 *status);
-int mthca_WRITE_MGM(struct mthca_dev *dev, int index, void *mgm,
-u8 *status);
+int mthca_WRITE_MGM(struct mthca_dev *dev, int index,
+struct mthca_mailbox *mailbox, u8 *status);
-int mthca_MGID_HASH(struct mthca_dev *dev, void *gid, u16 *hash,
-u8 *status);
+int mthca_MGID_HASH(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
+u16 *hash, u8 *status);
int mthca_NOP(struct mthca_dev *dev, u8 *status);
-#define MAILBOX_ALIGN(x) ((void *) ALIGN((unsigned long) (x), MTHCA_CMD_MAILBOX_ALIGN))
#endif /* MTHCA_CMD_H */
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@@ -171,6 +172,17 @@ static inline void set_cqe_hw(struct mthca_cqe *cqe)
cqe->owner = MTHCA_CQ_ENTRY_OWNER_HW;
}
+static void dump_cqe(struct mthca_dev *dev, void *cqe_ptr)
+{
+__be32 *cqe = cqe_ptr;
+(void) cqe; /* avoid warning if mthca_dbg compiled away... */
+mthca_dbg(dev, "CQE contents %08x %08x %08x %08x %08x %08x %08x %08x\n",
+be32_to_cpu(cqe[0]), be32_to_cpu(cqe[1]), be32_to_cpu(cqe[2]),
+be32_to_cpu(cqe[3]), be32_to_cpu(cqe[4]), be32_to_cpu(cqe[5]),
+be32_to_cpu(cqe[6]), be32_to_cpu(cqe[7]));
+}
/* /*
* incr is ignored in native Arbel (mem-free) mode, so cq->cons_index * incr is ignored in native Arbel (mem-free) mode, so cq->cons_index
* should be correct before calling update_cons_index(). * should be correct before calling update_cons_index().
...@@ -280,16 +292,12 @@ static int handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq, ...@@ -280,16 +292,12 @@ static int handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq,
int dbd; int dbd;
u32 new_wqe; u32 new_wqe;
if (1 && cqe->syndrome != SYNDROME_WR_FLUSH_ERR) { if (cqe->syndrome == SYNDROME_LOCAL_QP_OP_ERR) {
int j; mthca_dbg(dev, "local QP operation err "
"(QPN %06x, WQE @ %08x, CQN %06x, index %d)\n",
mthca_dbg(dev, "%x/%d: error CQE -> QPN %06x, WQE @ %08x\n", be32_to_cpu(cqe->my_qpn), be32_to_cpu(cqe->wqe),
cq->cqn, cq->cons_index, be32_to_cpu(cqe->my_qpn), cq->cqn, cq->cons_index);
be32_to_cpu(cqe->wqe)); dump_cqe(dev, cqe);
for (j = 0; j < 8; ++j)
printk(KERN_DEBUG " [%2x] %08x\n",
j * 4, be32_to_cpu(((u32 *) cqe)[j]));
} }
/* /*
...@@ -377,15 +385,6 @@ static int handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq, ...@@ -377,15 +385,6 @@ static int handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq,
return 0; return 0;
} }
static void dump_cqe(struct mthca_cqe *cqe)
{
int j;
for (j = 0; j < 8; ++j)
printk(KERN_DEBUG " [%2x] %08x\n",
j * 4, be32_to_cpu(((u32 *) cqe)[j]));
}
static inline int mthca_poll_one(struct mthca_dev *dev, static inline int mthca_poll_one(struct mthca_dev *dev,
struct mthca_cq *cq, struct mthca_cq *cq,
struct mthca_qp **cur_qp, struct mthca_qp **cur_qp,
...@@ -414,8 +413,7 @@ static inline int mthca_poll_one(struct mthca_dev *dev, ...@@ -414,8 +413,7 @@ static inline int mthca_poll_one(struct mthca_dev *dev,
mthca_dbg(dev, "%x/%d: CQE -> QPN %06x, WQE @ %08x\n", mthca_dbg(dev, "%x/%d: CQE -> QPN %06x, WQE @ %08x\n",
cq->cqn, cq->cons_index, be32_to_cpu(cqe->my_qpn), cq->cqn, cq->cons_index, be32_to_cpu(cqe->my_qpn),
be32_to_cpu(cqe->wqe)); be32_to_cpu(cqe->wqe));
dump_cqe(dev, cqe);
dump_cqe(cqe);
} }
is_error = (cqe->opcode & MTHCA_ERROR_CQE_OPCODE_MASK) == is_error = (cqe->opcode & MTHCA_ERROR_CQE_OPCODE_MASK) ==
@@ -638,19 +636,19 @@ static void mthca_free_cq_buf(struct mthca_dev *dev, struct mthca_cq *cq)
 	int size;
 
 	if (cq->is_direct)
-		pci_free_consistent(dev->pdev,
-				    (cq->ibcq.cqe + 1) * MTHCA_CQ_ENTRY_SIZE,
-				    cq->queue.direct.buf,
-				    pci_unmap_addr(&cq->queue.direct,
-						   mapping));
+		dma_free_coherent(&dev->pdev->dev,
+				  (cq->ibcq.cqe + 1) * MTHCA_CQ_ENTRY_SIZE,
+				  cq->queue.direct.buf,
+				  pci_unmap_addr(&cq->queue.direct,
+						 mapping));
 	else {
 		size = (cq->ibcq.cqe + 1) * MTHCA_CQ_ENTRY_SIZE;
 		for (i = 0; i < (size + PAGE_SIZE - 1) / PAGE_SIZE; ++i)
 			if (cq->queue.page_list[i].buf)
-				pci_free_consistent(dev->pdev, PAGE_SIZE,
-						    cq->queue.page_list[i].buf,
-						    pci_unmap_addr(&cq->queue.page_list[i],
-								   mapping));
+				dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
+						  cq->queue.page_list[i].buf,
+						  pci_unmap_addr(&cq->queue.page_list[i],
+								 mapping));
 
 		kfree(cq->queue.page_list);
 	}
@@ -670,8 +668,8 @@ static int mthca_alloc_cq_buf(struct mthca_dev *dev, int size,
 		npages = 1;
 		shift  = get_order(size) + PAGE_SHIFT;
 
-		cq->queue.direct.buf = pci_alloc_consistent(dev->pdev,
-							    size, &t);
+		cq->queue.direct.buf = dma_alloc_coherent(&dev->pdev->dev,
+							  size, &t, GFP_KERNEL);
 		if (!cq->queue.direct.buf)
 			return -ENOMEM;
@@ -709,7 +707,8 @@ static int mthca_alloc_cq_buf(struct mthca_dev *dev, int size,
 		for (i = 0; i < npages; ++i) {
 			cq->queue.page_list[i].buf =
-				pci_alloc_consistent(dev->pdev, PAGE_SIZE, &t);
+				dma_alloc_coherent(&dev->pdev->dev, PAGE_SIZE,
+						   &t, GFP_KERNEL);
 			if (!cq->queue.page_list[i].buf)
 				goto err_free;
@@ -746,7 +745,7 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
 		  struct mthca_cq *cq)
 {
 	int size = nent * MTHCA_CQ_ENTRY_SIZE;
-	void *mailbox = NULL;
+	struct mthca_mailbox *mailbox;
 	struct mthca_cq_context *cq_context;
 	int err = -ENOMEM;
 	u8 status;
@@ -780,12 +779,11 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
 			goto err_out_ci;
 	}
 
-	mailbox = kmalloc(sizeof (struct mthca_cq_context) + MTHCA_CMD_MAILBOX_EXTRA,
-			  GFP_KERNEL);
-	if (!mailbox)
-		goto err_out_mailbox;
-
-	cq_context = MAILBOX_ALIGN(mailbox);
+	mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL);
+	if (IS_ERR(mailbox))
+		goto err_out_arm;
+
+	cq_context = mailbox->buf;
 
 	err = mthca_alloc_cq_buf(dev, size, cq);
 	if (err)
@@ -816,7 +814,7 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
 		cq_context->state_db = cpu_to_be32(cq->arm_db_index);
 	}
 
-	err = mthca_SW2HW_CQ(dev, cq_context, cq->cqn, &status);
+	err = mthca_SW2HW_CQ(dev, mailbox, cq->cqn, &status);
 	if (err) {
 		mthca_warn(dev, "SW2HW_CQ failed (%d)\n", err);
 		goto err_out_free_mr;
@@ -840,7 +838,7 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
 	cq->cons_index = 0;
 
-	kfree(mailbox);
+	mthca_free_mailbox(dev, mailbox);
 
 	return 0;
@@ -849,8 +847,9 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
 	mthca_free_cq_buf(dev, cq);
 
 err_out_mailbox:
-	kfree(mailbox);
+	mthca_free_mailbox(dev, mailbox);
 
+err_out_arm:
 	if (mthca_is_memfree(dev))
 		mthca_free_db(dev, MTHCA_DB_TYPE_CQ_ARM, cq->arm_db_index);
@@ -870,28 +869,26 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
 void mthca_free_cq(struct mthca_dev *dev,
 		   struct mthca_cq *cq)
 {
-	void *mailbox;
+	struct mthca_mailbox *mailbox;
 	int err;
 	u8 status;
 
 	might_sleep();
 
-	mailbox = kmalloc(sizeof (struct mthca_cq_context) + MTHCA_CMD_MAILBOX_EXTRA,
-			  GFP_KERNEL);
-	if (!mailbox) {
+	mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL);
+	if (IS_ERR(mailbox)) {
 		mthca_warn(dev, "No memory for mailbox to free CQ.\n");
 		return;
 	}
 
-	err = mthca_HW2SW_CQ(dev, MAILBOX_ALIGN(mailbox), cq->cqn, &status);
+	err = mthca_HW2SW_CQ(dev, mailbox, cq->cqn, &status);
 	if (err)
 		mthca_warn(dev, "HW2SW_CQ failed (%d)\n", err);
 	else if (status)
-		mthca_warn(dev, "HW2SW_CQ returned status 0x%02x\n",
-			   status);
+		mthca_warn(dev, "HW2SW_CQ returned status 0x%02x\n", status);
 
 	if (0) {
-		u32 *ctx = MAILBOX_ALIGN(mailbox);
+		u32 *ctx = mailbox->buf;
 		int j;
 
 		printk(KERN_ERR "context for CQN %x (cons index %x, next sw %d)\n",
@@ -919,11 +916,11 @@ void mthca_free_cq(struct mthca_dev *dev,
 	if (mthca_is_memfree(dev)) {
 		mthca_free_db(dev, MTHCA_DB_TYPE_CQ_ARM,    cq->arm_db_index);
 		mthca_free_db(dev, MTHCA_DB_TYPE_CQ_SET_CI, cq->set_ci_db_index);
+		mthca_table_put(dev, dev->cq_table.table, cq->cqn);
 	}
 
-	mthca_table_put(dev, dev->cq_table.table, cq->cqn);
 	mthca_free(&dev->cq_table.alloc, cq->cqn);
-	kfree(mailbox);
+	mthca_free_mailbox(dev, mailbox);
 }
 
 int __devinit mthca_init_cq_table(struct mthca_dev *dev)
......
 /*
  * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
@@ -46,8 +47,8 @@
 #define DRV_NAME	"ib_mthca"
 #define PFX		DRV_NAME ": "
-#define DRV_VERSION	"0.06-pre"
-#define DRV_RELDATE	"November 8, 2004"
+#define DRV_VERSION	"0.06"
+#define DRV_RELDATE	"June 23, 2005"
 
 enum {
 	MTHCA_FLAG_DDR_HIDDEN = 1 << 1,
@@ -98,6 +99,7 @@ enum {
 };
 
 struct mthca_cmd {
+	struct pci_pool   *pool;
 	int                use_events;
 	struct semaphore   hcr_sem;
 	struct semaphore   poll_sem;
@@ -379,6 +381,12 @@ void mthca_uar_free(struct mthca_dev *dev, struct mthca_uar *uar);
 int mthca_pd_alloc(struct mthca_dev *dev, struct mthca_pd *pd);
 void mthca_pd_free(struct mthca_dev *dev, struct mthca_pd *pd);
 
+struct mthca_mtt *mthca_alloc_mtt(struct mthca_dev *dev, int size);
+void mthca_free_mtt(struct mthca_dev *dev, struct mthca_mtt *mtt);
+int mthca_write_mtt(struct mthca_dev *dev, struct mthca_mtt *mtt,
+		    int start_index, u64 *buffer_list, int list_len);
+int mthca_mr_alloc(struct mthca_dev *dev, u32 pd, int buffer_size_shift,
+		   u64 iova, u64 total_size, u32 access, struct mthca_mr *mr);
 int mthca_mr_alloc_notrans(struct mthca_dev *dev, u32 pd,
 			   u32 access, struct mthca_mr *mr);
 int mthca_mr_alloc_phys(struct mthca_dev *dev, u32 pd,
......
 /*
  * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
......
@@ -469,7 +469,7 @@ static int __devinit mthca_create_eq(struct mthca_dev *dev,
 		PAGE_SIZE;
 	u64 *dma_list = NULL;
 	dma_addr_t t;
-	void *mailbox = NULL;
+	struct mthca_mailbox *mailbox;
 	struct mthca_eq_context *eq_context;
 	int err = -ENOMEM;
 	int i;
@@ -494,17 +494,16 @@ static int __devinit mthca_create_eq(struct mthca_dev *dev,
 	if (!dma_list)
 		goto err_out_free;
 
-	mailbox = kmalloc(sizeof *eq_context + MTHCA_CMD_MAILBOX_EXTRA,
-			  GFP_KERNEL);
-	if (!mailbox)
+	mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL);
+	if (IS_ERR(mailbox))
 		goto err_out_free;
-	eq_context = MAILBOX_ALIGN(mailbox);
+	eq_context = mailbox->buf;
 
 	for (i = 0; i < npages; ++i) {
-		eq->page_list[i].buf = pci_alloc_consistent(dev->pdev,
-							    PAGE_SIZE, &t);
+		eq->page_list[i].buf = dma_alloc_coherent(&dev->pdev->dev,
+							  PAGE_SIZE, &t, GFP_KERNEL);
 		if (!eq->page_list[i].buf)
-			goto err_out_free;
+			goto err_out_free_pages;
 
 		dma_list[i] = t;
 		pci_unmap_addr_set(&eq->page_list[i], mapping, t);
@@ -517,7 +516,7 @@ static int __devinit mthca_create_eq(struct mthca_dev *dev,
 
 	eq->eqn = mthca_alloc(&dev->eq_table.alloc);
 	if (eq->eqn == -1)
-		goto err_out_free;
+		goto err_out_free_pages;
 
 	err = mthca_mr_alloc_phys(dev, dev->driver_pd.pd_num,
 				  dma_list, PAGE_SHIFT, npages,
@@ -548,7 +547,7 @@ static int __devinit mthca_create_eq(struct mthca_dev *dev,
 	eq_context->intr = intr;
 	eq_context->lkey = cpu_to_be32(eq->mr.ibmr.lkey);
 
-	err = mthca_SW2HW_EQ(dev, eq_context, eq->eqn, &status);
+	err = mthca_SW2HW_EQ(dev, mailbox, eq->eqn, &status);
 	if (err) {
 		mthca_warn(dev, "SW2HW_EQ failed (%d)\n", err);
 		goto err_out_free_mr;
@@ -561,7 +560,7 @@ static int __devinit mthca_create_eq(struct mthca_dev *dev,
 	}
 
 	kfree(dma_list);
-	kfree(mailbox);
+	mthca_free_mailbox(dev, mailbox);
 
 	eq->eqn_mask   = swab32(1 << eq->eqn);
 	eq->cons_index = 0;
@@ -579,17 +578,19 @@ static int __devinit mthca_create_eq(struct mthca_dev *dev,
  err_out_free_eq:
 	mthca_free(&dev->eq_table.alloc, eq->eqn);
 
- err_out_free:
+ err_out_free_pages:
 	for (i = 0; i < npages; ++i)
 		if (eq->page_list[i].buf)
-			pci_free_consistent(dev->pdev, PAGE_SIZE,
-					    eq->page_list[i].buf,
-					    pci_unmap_addr(&eq->page_list[i],
-							   mapping));
+			dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
+					  eq->page_list[i].buf,
+					  pci_unmap_addr(&eq->page_list[i],
+							 mapping));
 
+	mthca_free_mailbox(dev, mailbox);
+
+ err_out_free:
 	kfree(eq->page_list);
 	kfree(dma_list);
-	kfree(mailbox);
 
  err_out:
 	return err;
@@ -598,25 +599,22 @@ static int __devinit mthca_create_eq(struct mthca_dev *dev,
 static void mthca_free_eq(struct mthca_dev *dev,
 			  struct mthca_eq *eq)
 {
-	void *mailbox = NULL;
+	struct mthca_mailbox *mailbox;
 	int err;
 	u8 status;
 	int npages = (eq->nent * MTHCA_EQ_ENTRY_SIZE + PAGE_SIZE - 1) /
 		PAGE_SIZE;
 	int i;
 
-	mailbox = kmalloc(sizeof (struct mthca_eq_context) + MTHCA_CMD_MAILBOX_EXTRA,
-			  GFP_KERNEL);
-	if (!mailbox)
+	mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL);
+	if (IS_ERR(mailbox))
 		return;
 
-	err = mthca_HW2SW_EQ(dev, MAILBOX_ALIGN(mailbox),
-			     eq->eqn, &status);
+	err = mthca_HW2SW_EQ(dev, mailbox, eq->eqn, &status);
 	if (err)
 		mthca_warn(dev, "HW2SW_EQ failed (%d)\n", err);
 	if (status)
-		mthca_warn(dev, "HW2SW_EQ returned status 0x%02x\n",
-			   status);
+		mthca_warn(dev, "HW2SW_EQ returned status 0x%02x\n", status);
 
 	dev->eq_table.arm_mask &= ~eq->eqn_mask;
@@ -625,7 +623,7 @@ static void mthca_free_eq(struct mthca_dev *dev,
 		for (i = 0; i < sizeof (struct mthca_eq_context) / 4; ++i) {
 			if (i % 4 == 0)
 				printk("[%02x] ", i * 4);
-			printk(" %08x", be32_to_cpup(MAILBOX_ALIGN(mailbox) + i * 4));
+			printk(" %08x", be32_to_cpup(mailbox->buf + i * 4));
 			if ((i + 1) % 4 == 0)
 				printk("\n");
 		}
@@ -638,7 +636,7 @@ static void mthca_free_eq(struct mthca_dev *dev,
 				    pci_unmap_addr(&eq->page_list[i], mapping));
 
 	kfree(eq->page_list);
-	kfree(mailbox);
+	mthca_free_mailbox(dev, mailbox);
 }
 
 static void mthca_free_irqs(struct mthca_dev *dev)
@@ -709,8 +707,7 @@ static int __devinit mthca_map_eq_regs(struct mthca_dev *dev)
 	if (mthca_map_reg(dev, ((pci_resource_len(dev->pdev, 0) - 1) &
 				dev->fw.arbel.eq_arm_base) + 4, 4,
 			  &dev->eq_regs.arbel.eq_arm)) {
-		mthca_err(dev, "Couldn't map interrupt clear register, "
-			  "aborting.\n");
+		mthca_err(dev, "Couldn't map EQ arm register, aborting.\n");
 		mthca_unmap_reg(dev, (pci_resource_len(dev->pdev, 0) - 1) &
 				dev->fw.arbel.clr_int_base, MTHCA_CLR_INT_SIZE,
 				dev->clr_base);
@@ -721,8 +718,7 @@ static int __devinit mthca_map_eq_regs(struct mthca_dev *dev)
 			     dev->fw.arbel.eq_set_ci_base,
 			     MTHCA_EQ_SET_CI_SIZE,
 			     &dev->eq_regs.arbel.eq_set_ci_base)) {
-		mthca_err(dev, "Couldn't map interrupt clear register, "
-			  "aborting.\n");
+		mthca_err(dev, "Couldn't map EQ CI register, aborting.\n");
 		mthca_unmap_reg(dev, ((pci_resource_len(dev->pdev, 0) - 1) &
 					dev->fw.arbel.eq_arm_base) + 4, 4,
 				dev->eq_regs.arbel.eq_arm);
......
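The hunks above split the old `err_out_free` label into `err_out_free_pages` and `err_out_free`, so that pages and the mailbox are released only on the paths where they were actually acquired, and always in reverse order of acquisition. A generic, runnable sketch of that layered-goto idiom (the resource names and `setup()` are illustrative, not the driver's code):

```c
#include <stdlib.h>

/* Acquire three resources in order; on failure, unwind only what was
 * already acquired, newest first. fail_at simulates an allocation failure
 * at step 1, 2, or 3 (0 = no failure). */
static int setup(int fail_at)
{
        char *a, *b, *c;

        a = (fail_at == 1) ? NULL : malloc(16);
        if (!a)
                goto err_out;           /* nothing acquired yet */

        b = (fail_at == 2) ? NULL : malloc(16);
        if (!b)
                goto err_free_a;        /* only a to undo */

        c = (fail_at == 3) ? NULL : malloc(16);
        if (!c)
                goto err_free_b;        /* undo b, then fall through to a */

        /* success path releases everything explicitly */
        free(c);
        free(b);
        free(a);
        return 0;

err_free_b:
        free(b);
err_free_a:
        free(a);
err_out:
        return -1;
}
```

Each label frees exactly one layer and falls through to the next, so adding a new resource means adding one allocation, one `goto` target, and one `free` — the pattern the reordered labels in `mthca_create_eq()` restore.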
@@ -5,9 +5,7 @@
 # Each configuration option enables a list of files.
 
 obj-$(CONFIG_GAMEPORT)		+= gameport.o
-obj-$(CONFIG_GAMEPORT_CS461X)	+= cs461x.o
 obj-$(CONFIG_GAMEPORT_EMU10K1)	+= emu10k1-gp.o
 obj-$(CONFIG_GAMEPORT_FM801)	+= fm801-gp.o
 obj-$(CONFIG_GAMEPORT_L4)	+= lightning.o
 obj-$(CONFIG_GAMEPORT_NS558)	+= ns558.o
-obj-$(CONFIG_GAMEPORT_VORTEX)	+= vortex.o