commit 80c55208 authored by Thomas Gleixner

Merge branch 'cpus4096' into irq/threaded

Conflicts:
	arch/parisc/kernel/irq.c
	kernel/irq/handle.c
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
@@ -18,11 +18,11 @@ For an architecture to support this feature, it must define some of
 these macros in include/asm-XXX/topology.h:
 #define topology_physical_package_id(cpu)
 #define topology_core_id(cpu)
-#define topology_thread_siblings(cpu)
-#define topology_core_siblings(cpu)
+#define topology_thread_cpumask(cpu)
+#define topology_core_cpumask(cpu)
 The type of **_id is int.
-The type of siblings is cpumask_t.
+The type of siblings is (const) struct cpumask *.
 To be consistent on all architectures, include/linux/topology.h
 provides default definitions for any of the above macros that are
...
@@ -1310,8 +1310,13 @@ and is between 256 and 4096 characters. It is defined in the file
 	memtest=	[KNL,X86] Enable memtest
 			Format: <integer>
-			range: 0,4 : pattern number
 			default : 0 <disable>
+			Specifies the number of memtest passes to be
+			performed. Each pass selects another test
+			pattern from a given set of patterns. Memtest
+			fills the memory with this pattern, validates
+			memory contents and reserves bad memory
+			regions that are detected.
 	meye.*=		[HW] Set MotionEye Camera parameters
 			See Documentation/video4linux/meye.txt.
...
@@ -158,7 +158,7 @@ Offset	Proto	Name		Meaning
 0202/4	2.00+	header		Magic signature "HdrS"
 0206/2	2.00+	version		Boot protocol version supported
 0208/4	2.00+	realmode_swtch	Boot loader hook (see below)
-020C/2	2.00+	start_sys	The load-low segment (0x1000) (obsolete)
+020C/2	2.00+	start_sys_seg	The load-low segment (0x1000) (obsolete)
 020E/2	2.00+	kernel_version	Pointer to kernel version string
 0210/1	2.00+	type_of_loader	Boot loader identifier
 0211/1	2.00+	loadflags	Boot protocol option flags
@@ -170,10 +170,11 @@ Offset	Proto	Name		Meaning
 0224/2	2.01+	heap_end_ptr	Free memory after setup end
 0226/2	N/A	pad1		Unused
 0228/4	2.02+	cmd_line_ptr	32-bit pointer to the kernel command line
-022C/4	2.03+	initrd_addr_max	Highest legal initrd address
+022C/4	2.03+	ramdisk_max	Highest legal initrd address
 0230/4	2.05+	kernel_alignment	Physical addr alignment required for kernel
 0234/1	2.05+	relocatable_kernel	Whether kernel is relocatable or not
-0235/3	N/A	pad2		Unused
+0235/1	N/A	pad2		Unused
+0236/2	N/A	pad3		Unused
 0238/4	2.06+	cmdline_size	Maximum size of the kernel command line
 023C/4	2.07+	hardware_subarch	Hardware subarchitecture
 0240/8	2.07+	hardware_subarch_data	Subarchitecture-specific data
@@ -299,14 +300,14 @@ Protocol:	2.00+
 e.g. 0x0204 for version 2.04, and 0x0a11 for a hypothetical version
 10.17.
-Field name:	readmode_swtch
+Field name:	realmode_swtch
 Type:		modify (optional)
 Offset/size:	0x208/4
 Protocol:	2.00+
 Boot loader hook (see ADVANCED BOOT LOADER HOOKS below.)
-Field name:	start_sys
+Field name:	start_sys_seg
 Type:		read
 Offset/size:	0x20c/2
 Protocol:	2.00+
@@ -468,7 +469,7 @@ Protocol:	2.02+
 zero, the kernel will assume that your boot loader does not support
 the 2.02+ protocol.
-Field name:	initrd_addr_max
+Field name:	ramdisk_max
 Type:		read
 Offset/size:	0x22c/4
 Protocol:	2.03+
@@ -542,7 +543,10 @@ Protocol:	2.08+
 The payload may be compressed. The format of both the compressed and
 uncompressed data should be determined using the standard magic
-numbers. Currently only gzip compressed ELF is used.
+numbers. The currently supported compression formats are gzip
+(magic numbers 1F 8B or 1F 9E), bzip2 (magic number 42 5A) and LZMA
+(magic number 5D 00). The uncompressed payload is currently always ELF
+(magic number 7F 45 4C 46).
 Field name:	payload_length
 Type:		read
...
Mini-HOWTO for using the earlyprintk=dbgp boot option with a
USB2 Debug port key and a debug cable, on x86 systems.
You need two computers, the 'USB debug key' special gadget, and
two USB cables, connected like this:
[host/target] <-------> [USB debug key] <-------> [client/console]
1. There are three specific hardware requirements:
a.) Host/target system needs to have USB debug port capability.
You can check this capability by looking at a 'Debug port' bit in
the lspci -vvv output:
# lspci -vvv
...
00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 03) (prog-if 20 [EHCI])
Subsystem: Lenovo ThinkPad T61
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin D routed to IRQ 19
Region 0: Memory at fe227000 (32-bit, non-prefetchable) [size=1K]
Capabilities: [50] Power Management version 2
Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 PME-Enable- DSel=0 DScale=0 PME+
Capabilities: [58] Debug port: BAR=1 offset=00a0
^^^^^^^^^^^ <==================== [ HERE ]
Kernel driver in use: ehci_hcd
Kernel modules: ehci-hcd
...
( If your system does not list a debug port capability then you probably
won't be able to use the USB debug key. )
b.) You also need a Netchip USB debug cable/key:
http://www.plxtech.com/products/NET2000/NET20DC/default.asp
This is a small blue plastic connector with two USB connections,
it draws power from its USB connections.
c.) Thirdly, you need a second client/console system with a regular USB port.
2. Software requirements:
a.) On the host/target system:
You need to enable the following kernel config option:
CONFIG_EARLY_PRINTK_DBGP=y
And you need to add the boot command line: "earlyprintk=dbgp".
(If you are using Grub, append it to the 'kernel' line in
/etc/grub.conf)
NOTE: normally the earlyprintk console gets turned off once the
regular console is alive - use "earlyprintk=dbgp,keep" to keep
this channel open beyond early bootup. This can be useful for
debugging crashes under Xorg, etc.
b.) On the client/console system:
You should enable the following kernel config option:
CONFIG_USB_SERIAL_DEBUG=y
On the next bootup with the modified kernel you should
get one or more /dev/ttyUSBx devices.
Now this channel of kernel messages is ready to be used: start
your favorite terminal emulator (minicom, etc.) and set
it up to use /dev/ttyUSB0 - or use a raw 'cat /dev/ttyUSBx' to
see the raw output.
c.) On Nvidia Southbridge based systems: the kernel will try to probe
and find out which port has the debug device connected.
3. Testing that it works fine:
You can test the output by using earlyprintk=dbgp,keep and provoking
kernel messages on the host/target system. You can provoke a harmless
kernel message by for example doing:
echo h > /proc/sysrq-trigger
On the host/target system you should see this help line in "dmesg" output:
SysRq : HELP : loglevel(0-9) reBoot Crashdump terminate-all-tasks(E) memory-full-oom-kill(F) kill-all-tasks(I) saK show-backtrace-all-active-cpus(L) show-memory-usage(M) nice-all-RT-tasks(N) powerOff show-registers(P) show-all-timers(Q) unRaw Sync show-task-states(T) Unmount show-blocked-tasks(W) dump-ftrace-buffer(Z)
On the client/console system do:
cat /dev/ttyUSB0
And you should see the help line above displayed shortly after you've
provoked it on the host system.
If it does not work then please ask about it on the linux-kernel@vger.kernel.org
mailing list or contact the x86 maintainers.
@@ -533,8 +533,9 @@ KBUILD_CFLAGS += $(call cc-option,-Wframe-larger-than=${CONFIG_FRAME_WARN})
 endif
 # Force gcc to behave correct even for buggy distributions
-# Arch Makefiles may override this setting
+ifndef CONFIG_CC_STACKPROTECTOR
 KBUILD_CFLAGS += $(call cc-option, -fno-stack-protector)
+endif
 ifdef CONFIG_FRAME_POINTER
 KBUILD_CFLAGS += -fno-omit-frame-pointer -fno-optimize-sibling-calls
...
 #ifndef _ALPHA_STATFS_H
 #define _ALPHA_STATFS_H
+#include <linux/types.h>
 /* Alpha is the only 64-bit platform with 32-bit statfs. And doesn't
    even seem to implement statfs64 */
 #define __statfs_word __u32
...
 #ifndef _ALPHA_SWAB_H
 #define _ALPHA_SWAB_H
-#include <asm/types.h>
+#include <linux/types.h>
 #include <linux/compiler.h>
 #include <asm/compiler.h>
...
@@ -55,7 +55,7 @@ int irq_select_affinity(unsigned int irq)
 	cpu = (cpu < (NR_CPUS-1) ? cpu + 1 : 0);
 	last_cpu = cpu;
-	irq_desc[irq].affinity = cpumask_of_cpu(cpu);
+	cpumask_copy(irq_desc[irq].affinity, cpumask_of(cpu));
 	irq_desc[irq].chip->set_affinity(irq, cpumask_of(cpu));
 	return 0;
 }
...
@@ -189,9 +189,21 @@ callback_init(void * kernel_end)
 	if (alpha_using_srm) {
 		static struct vm_struct console_remap_vm;
-		unsigned long vaddr = VMALLOC_START;
+		unsigned long nr_pages = 0;
+		unsigned long vaddr;
 		unsigned long i, j;
+		/* calculate needed size */
+		for (i = 0; i < crb->map_entries; ++i)
+			nr_pages += crb->map[i].count;
+		/* register the vm area */
+		console_remap_vm.flags = VM_ALLOC;
+		console_remap_vm.size = nr_pages << PAGE_SHIFT;
+		vm_area_register_early(&console_remap_vm, PAGE_SIZE);
+		vaddr = (unsigned long)console_remap_vm.addr;
 		/* Set up the third level PTEs and update the virtual
 		   addresses of the CRB entries. */
 		for (i = 0; i < crb->map_entries; ++i) {
@@ -213,12 +225,6 @@ callback_init(void * kernel_end)
 			vaddr += PAGE_SIZE;
 		}
 	}
-	/* Let vmalloc know that we've allocated some space. */
-	console_remap_vm.flags = VM_ALLOC;
-	console_remap_vm.addr = (void *) VMALLOC_START;
-	console_remap_vm.size = vaddr - VMALLOC_START;
-	vmlist = &console_remap_vm;
 	}
 	callback_init_done = 1;
...
@@ -2,7 +2,7 @@
 #define __ARM_A_OUT_H__
 #include <linux/personality.h>
-#include <asm/types.h>
+#include <linux/types.h>
 struct exec
 {
...
@@ -14,7 +14,7 @@
 #ifndef __ASMARM_SETUP_H
 #define __ASMARM_SETUP_H
-#include <asm/types.h>
+#include <linux/types.h>
 #define COMMAND_LINE_SIZE 1024
...
@@ -16,7 +16,7 @@
 #define __ASM_ARM_SWAB_H
 #include <linux/compiler.h>
-#include <asm/types.h>
+#include <linux/types.h>
 #if !defined(__STRICT_ANSI__) || defined(__KERNEL__)
 # define __SWAB_64_THRU_32__
...
@@ -104,6 +104,11 @@ static struct irq_desc bad_irq_desc = {
 	.lock = __SPIN_LOCK_UNLOCKED(bad_irq_desc.lock),
 };
+#ifdef CONFIG_CPUMASK_OFFSTACK
+/* We are not allocating bad_irq_desc.affinity or .pending_mask */
+#error "ARM architecture does not support CONFIG_CPUMASK_OFFSTACK."
+#endif
 /*
  * do_IRQ handles all hardware IRQ's. Decoded IRQs should not
  * come via this function. Instead, they should provide their
@@ -161,7 +166,7 @@ void __init init_IRQ(void)
 		irq_desc[irq].status |= IRQ_NOREQUEST | IRQ_NOPROBE;
 #ifdef CONFIG_SMP
-	bad_irq_desc.affinity = CPU_MASK_ALL;
+	cpumask_setall(bad_irq_desc.affinity);
 	bad_irq_desc.cpu = smp_processor_id();
 #endif
 	init_arch_irq();
@@ -191,15 +196,16 @@ void migrate_irqs(void)
 		struct irq_desc *desc = irq_desc + i;
 		if (desc->cpu == cpu) {
-			unsigned int newcpu = any_online_cpu(desc->affinity);
+			unsigned int newcpu = cpumask_any_and(desc->affinity,
+							      cpu_online_mask);
-			if (newcpu == NR_CPUS) {
+			if (newcpu >= nr_cpu_ids) {
 				if (printk_ratelimit())
 					printk(KERN_INFO "IRQ%u no longer affine to CPU%u\n",
 					       i, cpu);
-				cpus_setall(desc->affinity);
-				newcpu = any_online_cpu(desc->affinity);
+				cpumask_setall(desc->affinity);
+				newcpu = cpumask_any_and(desc->affinity,
+							 cpu_online_mask);
 			}
 			route_irq(desc, i, newcpu);
...
@@ -65,6 +65,7 @@ SECTIONS
 #endif
 		. = ALIGN(4096);
 		__per_cpu_start = .;
+			*(.data.percpu.page_aligned)
 			*(.data.percpu)
 			*(.data.percpu.shared_aligned)
 		__per_cpu_end = .;
...
@@ -263,7 +263,7 @@ static void em_route_irq(int irq, unsigned int cpu)
 	const struct cpumask *mask = cpumask_of(cpu);
 	spin_lock_irq(&desc->lock);
-	desc->affinity = *mask;
+	cpumask_copy(desc->affinity, mask);
 	desc->chip->set_affinity(irq, mask);
 	spin_unlock_irq(&desc->lock);
 }
...
@@ -181,7 +181,7 @@ source "kernel/Kconfig.preempt"
 config QUICKLIST
 	def_bool y
-config HAVE_ARCH_BOOTMEM_NODE
+config HAVE_ARCH_BOOTMEM
 	def_bool n
 config ARCH_HAVE_MEMORY_PRESENT
...
@@ -4,7 +4,7 @@
 #ifndef __ASM_AVR32_SWAB_H
 #define __ASM_AVR32_SWAB_H
-#include <asm/types.h>
+#include <linux/types.h>
 #include <linux/compiler.h>
 #define __SWAB_64_THRU_32__
...
@@ -3,14 +3,4 @@
 #include <asm-generic/percpu.h>
-#ifdef CONFIG_MODULES
-#define PERCPU_MODULE_RESERVE 8192
-#else
-#define PERCPU_MODULE_RESERVE 0
-#endif
-#define PERCPU_ENOUGH_ROOM \
-	(ALIGN(__per_cpu_end - __per_cpu_start, SMP_CACHE_BYTES) + \
-	 PERCPU_MODULE_RESERVE)
 #endif	/* __ARCH_BLACKFIN_PERCPU__ */
 #ifndef _BLACKFIN_SWAB_H
 #define _BLACKFIN_SWAB_H
-#include <asm/types.h>
+#include <linux/types.h>
 #include <linux/compiler.h>
 #if defined(__GNUC__) && !defined(__STRICT_ANSI__) || defined(__KERNEL__)
...
@@ -70,6 +70,11 @@ static struct irq_desc bad_irq_desc = {
 #endif
 };
+#ifdef CONFIG_CPUMASK_OFFSTACK
+/* We are not allocating a variable-sized bad_irq_desc.affinity */
+#error "Blackfin architecture does not support CONFIG_CPUMASK_OFFSTACK."
+#endif
 int show_interrupts(struct seq_file *p, void *v)
 {
 	int i = *(loff_t *) v, j;
...
 #ifndef _H8300_SWAB_H
 #define _H8300_SWAB_H
-#include <asm/types.h>
+#include <linux/types.h>
 #if defined(__GNUC__) && !defined(__STRICT_ANSI__) || defined(__KERNEL__)
 # define __SWAB_64_THRU_32__
...
@@ -6,8 +6,6 @@
  * David Mosberger-Tang <davidm@hpl.hp.com>
  */
-#include <asm/types.h>
 /* floating point status register: */
 #define FPSR_TRAP_VD	(1 << 0)	/* invalid op trap disabled */
 #define FPSR_TRAP_DD	(1 << 1)	/* denormal trap disabled */
...
@@ -6,6 +6,7 @@
  * Copyright (C) 2002,2003 Suresh Siddha <suresh.b.siddha@intel.com>
  */
+#include <linux/types.h>
 #include <linux/compiler.h>
 /* define this macro to get some asm stmts included in 'c' files */
...
@@ -10,6 +10,7 @@
 #ifndef __ASSEMBLY__
+#include <linux/types.h>
 /* include compiler specific intrinsics */
 #include <asm/ia64regs.h>
 #ifdef __INTEL_COMPILER
...
@@ -21,8 +21,7 @@
  *
  */
-#include <asm/types.h>
+#include <linux/types.h>
 #include <linux/ioctl.h>
 /* Select x86 specific features in <linux/kvm.h> */
...
@@ -27,12 +27,12 @@ extern void *per_cpu_init(void);
 #else /* ! SMP */
-#define PER_CPU_ATTRIBUTES	__attribute__((__section__(".data.percpu")))
 #define per_cpu_init()		(__phys_per_cpu_start)
 #endif	/* SMP */
+#define PER_CPU_BASE_SECTION ".data.percpu"
 /*
  * Be extremely careful when taking the address of this variable! Due to virtual
  * remapping, it is different from the canonical address returned by __get_cpu_var(var)!
...
@@ -6,7 +6,7 @@
  * David Mosberger-Tang <davidm@hpl.hp.com>, Hewlett-Packard Co.
  */
-#include <asm/types.h>
+#include <linux/types.h>
 #include <asm/intrinsics.h>
 #include <linux/compiler.h>
...
@@ -84,7 +84,7 @@ void build_cpu_to_node_map(void);
 	.child			= NULL,		\
 	.groups			= NULL,		\
 	.min_interval		= 8,		\
-	.max_interval		= 8*(min(num_online_cpus(), 32)),	\
+	.max_interval		= 8*(min(num_online_cpus(), 32U)),	\
 	.busy_factor		= 64,		\
 	.imbalance_pct		= 125,		\
 	.cache_nice_tries	= 2,		\
...
#ifndef _ASM_IA64_UV_UV_H
#define _ASM_IA64_UV_UV_H
#include <asm/system.h>
#include <asm/sn/simulator.h>
static inline int is_uv_system(void)
{
/* temporary support for running on hardware simulator */
return IS_MEDUSA() || ia64_platform_is("uv");
}
#endif /* _ASM_IA64_UV_UV_H */
@@ -199,6 +199,10 @@ char *__init __acpi_map_table(unsigned long phys_addr, unsigned long size)
 	return __va(phys_addr);
 }
+void __init __acpi_unmap_table(char *map, unsigned long size)
+{
+}
 /* --------------------------------------------------------------------------
                              Boot-time Table Parsing
    -------------------------------------------------------------------------- */
...
@@ -880,7 +880,7 @@ iosapic_unregister_intr (unsigned int gsi)
 	if (iosapic_intr_info[irq].count == 0) {
 #ifdef CONFIG_SMP
 		/* Clear affinity */
-		cpus_setall(idesc->affinity);
+		cpumask_setall(idesc->affinity);
 #endif
 		/* Clear the interrupt information */
 		iosapic_intr_info[irq].dest = 0;
...
@@ -103,7 +103,7 @@ static char irq_redir [NR_IRQS]; // = { [0 ... NR_IRQS-1] = 1 };
 void set_irq_affinity_info (unsigned int irq, int hwid, int redir)
 {
 	if (irq < NR_IRQS) {
-		cpumask_copy(&irq_desc[irq].affinity,
+		cpumask_copy(irq_desc[irq].affinity,
 			     cpumask_of(cpu_logical_id(hwid)));
 		irq_redir[irq] = (char) (redir & 0xff);
 	}
@@ -148,7 +148,7 @@ static void migrate_irqs(void)
 		if (desc->status == IRQ_PER_CPU)
 			continue;
-		if (cpumask_any_and(&irq_desc[irq].affinity, cpu_online_mask)
+		if (cpumask_any_and(irq_desc[irq].affinity, cpu_online_mask)
 		    >= nr_cpu_ids) {
 			/*
 			 * Save it for phase 2 processing
...
@@ -493,11 +493,13 @@ ia64_handle_irq (ia64_vector vector, struct pt_regs *regs)
 	saved_tpr = ia64_getreg(_IA64_REG_CR_TPR);
 	ia64_srlz_d();
 	while (vector != IA64_SPURIOUS_INT_VECTOR) {
+		struct irq_desc *desc = irq_to_desc(vector);
 		if (unlikely(IS_LOCAL_TLB_FLUSH(vector))) {
 			smp_local_flush_tlb();
-			kstat_this_cpu.irqs[vector]++;
+			kstat_incr_irqs_this_cpu(vector, desc);
 		} else if (unlikely(IS_RESCHEDULE(vector)))
-			kstat_this_cpu.irqs[vector]++;
+			kstat_incr_irqs_this_cpu(vector, desc);
 		else {
 			int irq = local_vector_to_irq(vector);
@@ -551,11 +553,13 @@ void ia64_process_pending_intr(void)
 	 * Perform normal interrupt style processing
 	 */
 	while (vector != IA64_SPURIOUS_INT_VECTOR) {
+		struct irq_desc *desc = irq_to_desc(vector);
 		if (unlikely(IS_LOCAL_TLB_FLUSH(vector))) {
 			smp_local_flush_tlb();
-			kstat_this_cpu.irqs[vector]++;
+			kstat_incr_irqs_this_cpu(vector, desc);
 		} else if (unlikely(IS_RESCHEDULE(vector)))
-			kstat_this_cpu.irqs[vector]++;
+			kstat_incr_irqs_this_cpu(vector, desc);
 		else {
 			struct pt_regs *old_regs = set_irq_regs(NULL);
 			int irq = local_vector_to_irq(vector);
...
@@ -75,7 +75,7 @@ static void ia64_set_msi_irq_affinity(unsigned int irq,
 	msg.data = data;
 	write_msi_msg(irq, &msg);
-	irq_desc[irq].affinity = cpumask_of_cpu(cpu);
+	cpumask_copy(irq_desc[irq].affinity, cpumask_of(cpu));
 }
 #endif /* CONFIG_SMP */
@@ -187,7 +187,7 @@ static void dmar_msi_set_affinity(unsigned int irq, const struct cpumask *mask)
 	msg.address_lo |= MSI_ADDR_DESTID_CPU(cpu_physical_id(cpu));
 	dmar_msi_write(irq, &msg);
-	irq_desc[irq].affinity = *mask;
+	cpumask_copy(irq_desc[irq].affinity, mask);
 }
 #endif /* CONFIG_SMP */
...
@@ -219,6 +219,7 @@ SECTIONS
 	.data.percpu PERCPU_ADDR : AT(__phys_per_cpu_start - LOAD_OFFSET)
 		{
 			__per_cpu_start = .;
+			*(.data.percpu.page_aligned)
 			*(.data.percpu)
 			*(.data.percpu.shared_aligned)
 			__per_cpu_end = .;
...
@@ -205,7 +205,7 @@ static void sn_set_msi_irq_affinity(unsigned int irq,
 	msg.address_lo = (u32)(bus_addr & 0x00000000ffffffff);
 	write_msi_msg(irq, &msg);
-	irq_desc[irq].affinity = *cpu_mask;
+	cpumask_copy(irq_desc[irq].affinity, cpu_mask);
 }
 #endif /* CONFIG_SMP */
...
@@ -66,7 +66,7 @@ extern void smtc_forward_irq(unsigned int irq);
 */
 #define IRQ_AFFINITY_HOOK(irq) \
 do { \
-    if (!cpu_isset(smp_processor_id(), irq_desc[irq].affinity)) { \
+    if (!cpumask_test_cpu(smp_processor_id(), irq_desc[irq].affinity)) {\
	smtc_forward_irq(irq); \
	irq_exit(); \
	return; \
...
@@ -9,6 +9,7 @@
 #ifndef _ASM_SIGCONTEXT_H
 #define _ASM_SIGCONTEXT_H
+#include <linux/types.h>
 #include <asm/sgidefs.h>
 #if _MIPS_SIM == _MIPS_SIM_ABI32
...
@@ -9,7 +9,7 @@
 #define _ASM_SWAB_H
 #include <linux/compiler.h>
-#include <asm/types.h>
+#include <linux/types.h>
 #define __SWAB_64_THRU_32__
...
@@ -187,7 +187,7 @@ static void gic_set_affinity(unsigned int irq, const struct cpumask *cpumask)
 		set_bit(irq, pcpu_masks[first_cpu(tmp)].pcpu_mask);
 	}
-	irq_desc[irq].affinity = *cpumask;
+	cpumask_copy(irq_desc[irq].affinity, cpumask);
 	spin_unlock_irqrestore(&gic_lock, flags);
 }
...
@@ -686,7 +686,7 @@ void smtc_forward_irq(unsigned int irq)
 	 * and efficiency, we just pick the easiest one to find.
 	 */
-	target = first_cpu(irq_desc[irq].affinity);
+	target = cpumask_first(irq_desc[irq].affinity);
 	/*
 	 * We depend on the platform code to have correctly processed
@@ -921,11 +921,13 @@ void ipi_decode(struct smtc_ipi *pipi)
 	struct clock_event_device *cd;
 	void *arg_copy = pipi->arg;
 	int type_copy = pipi->type;
+	int irq = MIPS_CPU_IRQ_BASE + 1;
 	smtc_ipi_nq(&freeIPIq, pipi);
 	switch (type_copy) {
 	case SMTC_CLOCK_TICK:
 		irq_enter();
-		kstat_this_cpu.irqs[MIPS_CPU_IRQ_BASE + 1]++;
+		kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 		cd = &per_cpu(mips_clockevent_device, cpu);
 		cd->event_handler(cd);
 		irq_exit();
...
@@ -116,7 +116,7 @@ struct plat_smp_ops msmtc_smp_ops = {
 void plat_set_irq_affinity(unsigned int irq, const struct cpumask *affinity)
 {
-	cpumask_t tmask = *affinity;
+	cpumask_t tmask;
 	int cpu = 0;
 	void smtc_set_irq_affinity(unsigned int irq, cpumask_t aff);
@@ -139,11 +139,12 @@ void plat_set_irq_affinity(unsigned int irq, const struct cpumask *affinity)
 	 * be made to forward to an offline "CPU".
 	 */
+	cpumask_copy(&tmask, affinity);
 	for_each_cpu(cpu, affinity) {
 		if ((cpu_data[cpu].vpe_id != 0) || !cpu_online(cpu))
 			cpu_clear(cpu, tmask);
 	}
-	irq_desc[irq].affinity = tmask;
+	cpumask_copy(irq_desc[irq].affinity, &tmask);
 	if (cpus_empty(tmask))
 		/*
...
@@ -155,7 +155,7 @@ static void indy_buserror_irq(void)
 	int irq = SGI_BUSERR_IRQ;
 	irq_enter();
-	kstat_this_cpu.irqs[irq]++;
+	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 	ip22_be_interrupt(irq);
 	irq_exit();
 }
...
@@ -122,7 +122,7 @@ void indy_8254timer_irq(void)
 	char c;
 	irq_enter();
-	kstat_this_cpu.irqs[irq]++;
+	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 	printk(KERN_ALERT "Oops, got 8254 interrupt.\n");
 	ArcRead(0, &c, 1, &cnt);
 	ArcEnterInteractiveMode();
...
@@ -178,9 +178,10 @@ struct plat_smp_ops bcm1480_smp_ops = {
 void bcm1480_mailbox_interrupt(void)
 {
 	int cpu = smp_processor_id();
+	int irq = K_BCM1480_INT_MBOX_0_0;
 	unsigned int action;
-	kstat_this_cpu.irqs[K_BCM1480_INT_MBOX_0_0]++;
+	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 	/* Load the mailbox register to figure out what we're supposed to do */
 	action = (__raw_readq(mailbox_0_regs[cpu]) >> 48) & 0xffff;
...
@@ -166,9 +166,10 @@ struct plat_smp_ops sb_smp_ops = {
 void sb1250_mailbox_interrupt(void)
 {
 	int cpu = smp_processor_id();
+	int irq = K_INT_MBOX_0;
 	unsigned int action;
-	kstat_this_cpu.irqs[K_INT_MBOX_0]++;
+	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 	/* Load the mailbox register to figure out what we're supposed to do */
 	action = (____raw_readq(mailbox_regs[cpu]) >> 48) & 0xffff;
...
@@ -130,6 +130,7 @@ void watchdog_interrupt(struct pt_regs *regs, enum exception_code excep)
 	 * the stack NMI-atomically, it's safe to use smp_processor_id().
 	 */
 	int sum, cpu = smp_processor_id();
+	int irq = NMIIRQ;
 	u8 wdt, tmp;
 	wdt = WDCTR & ~WDCTR_WDCNE;
@@ -138,7 +139,7 @@ void watchdog_interrupt(struct pt_regs *regs, enum exception_code excep)
 	NMICR = NMICR_WDIF;
 	nmi_count(cpu)++;
-	kstat_this_cpu.irqs[NMIIRQ]++;
+	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 	sum = irq_stat[cpu].__irq_count;
 	if (last_irq_sums[cpu] == sum) {
...
@@ -336,10 +336,11 @@
 #define NUM_PDC_RESULT	32
 #if !defined(__ASSEMBLY__)
-#ifdef __KERNEL__
 #include <linux/types.h>
+#ifdef __KERNEL__
 extern int pdc_type;
 /* Values for pdc_type */
...
 #ifndef _PARISC_SWAB_H
 #define _PARISC_SWAB_H
-#include <asm/types.h>
+#include <linux/types.h>
 #include <linux/compiler.h>
 #define __SWAB_64_THRU_32__
...
@@ -138,7 +138,7 @@ static void cpu_set_affinity_irq(unsigned int irq, const struct cpumask *dest)
 	if (cpu_dest < 0)
 		return;
-	cpumask_copy(&irq_desc[irq].affinity, &cpumask_of_cpu(cpu_dest));
+	cpumask_copy(&irq_desc[irq].affinity, dest);
 }
 #endif
...
@@ -9,7 +9,7 @@
 #ifndef __ASM_BOOTX_H__
 #define __ASM_BOOTX_H__
-#include <asm/types.h>
+#include <linux/types.h>
 #ifdef macintosh
 #include <Types.h>
...
@@ -7,7 +7,7 @@
 #include <asm/string.h>
 #endif
-#include <asm/types.h>
+#include <linux/types.h>
 #include <asm/ptrace.h>
 #include <asm/cputable.h>
 #include <asm/auxvec.h>
...
@@ -20,7 +20,7 @@
 #ifndef __LINUX_KVM_POWERPC_H
 #define __LINUX_KVM_POWERPC_H
-#include <asm/types.h>
+#include <linux/types.h>
 struct kvm_regs {
 	__u64 pc;
...
@@ -8,6 +8,7 @@
 #define _ASM_MMZONE_H_
 #ifdef __KERNEL__
+#include <linux/cpumask.h>
 /*
  * generic non-linear memory support:
...
@@ -19,6 +19,7 @@
 #ifndef _ASM_POWERPC_PS3FB_H_
 #define _ASM_POWERPC_PS3FB_H_
+#include <linux/types.h>
 #include <linux/ioctl.h>
 /* ioctl */
...
@@ -23,9 +23,10 @@
 #ifndef _SPU_INFO_H
 #define _SPU_INFO_H
+#include <linux/types.h>
 #ifdef __KERNEL__
 #include <asm/spu.h>
-#include <linux/types.h>
 #else
 struct mfc_cq_sr {
 	__u64 mfc_cq_data0_RW;
...
@@ -8,7 +8,7 @@
  * 2 of the License, or (at your option) any later version.
  */
-#include <asm/types.h>
+#include <linux/types.h>
 #include <linux/compiler.h>
 #ifdef __GNUC__
...
@@ -231,7 +231,7 @@ void fixup_irqs(cpumask_t map)
 		if (irq_desc[irq].status & IRQ_PER_CPU)
 			continue;
-		cpus_and(mask, irq_desc[irq].affinity, map);
+		cpumask_and(&mask, irq_desc[irq].affinity, &map);
 		if (any_online_cpu(mask) == NR_CPUS) {
 			printk("Breaking affinity for irq %i\n", irq);
 			mask = map;
...
@@ -184,6 +184,7 @@ SECTIONS
 	. = ALIGN(PAGE_SIZE);
 	.data.percpu : AT(ADDR(.data.percpu) - LOAD_OFFSET) {
 		__per_cpu_start = .;
+		*(.data.percpu.page_aligned)
 		*(.data.percpu)
 		*(.data.percpu.shared_aligned)
 		__per_cpu_end = .;
...
@@ -153,9 +153,10 @@ static int get_irq_server(unsigned int virq, unsigned int strict_check)
 {
 	int server;
 	/* For the moment only implement delivery to all cpus or one cpu */
-	cpumask_t cpumask = irq_desc[virq].affinity;
+	cpumask_t cpumask;
 	cpumask_t tmp = CPU_MASK_NONE;
+	cpumask_copy(&cpumask, irq_desc[virq].affinity);
 	if (!distribute_irqs)
 		return default_server;
@@ -869,7 +870,7 @@ void xics_migrate_irqs_away(void)
 			virq, cpu);
 		/* Reset affinity to all cpus */
-		irq_desc[virq].affinity = CPU_MASK_ALL;
+		cpumask_setall(irq_desc[virq].affinity);
 		desc->chip->set_affinity(virq, cpu_all_mask);
 unlock:
 		spin_unlock_irqrestore(&desc->lock, flags);
...
@@ -566,9 +566,10 @@ static void __init mpic_scan_ht_pics(struct mpic *mpic)
 #ifdef CONFIG_SMP
 static int irq_choose_cpu(unsigned int virt_irq)
 {
-	cpumask_t mask = irq_desc[virt_irq].affinity;
+	cpumask_t mask;
 	int cpuid;
+	cpumask_copy(&mask, irq_desc[virt_irq].affinity);
 	if (cpus_equal(mask, CPU_MASK_ALL)) {
 		static int irq_rover;
 		static DEFINE_SPINLOCK(irq_rover_lock);
...
@@ -3,6 +3,8 @@
 #ifdef CONFIG_NEED_MULTIPLE_NODES
+#include <linux/cpumask.h>
 extern struct pglist_data *node_data[];
 #define NODE_DATA(nid)		(node_data[nid])
...
@@ -252,9 +252,10 @@ struct irq_handler_data {
 #ifdef CONFIG_SMP
 static int irq_choose_cpu(unsigned int virt_irq)
 {
-	cpumask_t mask = irq_desc[virt_irq].affinity;
+	cpumask_t mask;
 	int cpuid;
+	cpumask_copy(&mask, irq_desc[virt_irq].affinity);
 	if (cpus_equal(mask, CPU_MASK_ALL)) {
 		static int irq_rover;
 		static DEFINE_SPINLOCK(irq_rover_lock);
@@ -805,7 +806,7 @@ void fixup_irqs(void)
 		    !(irq_desc[irq].status & IRQ_PER_CPU)) {
 			if (irq_desc[irq].chip->set_affinity)
 				irq_desc[irq].chip->set_affinity(irq,
-					&irq_desc[irq].affinity);
+					irq_desc[irq].affinity);
 		}
 		spin_unlock_irqrestore(&irq_desc[irq].lock, flags);
 	}
...
@@ -729,7 +729,7 @@ void timer_interrupt(int irq, struct pt_regs *regs)
 	irq_enter();
-	kstat_this_cpu.irqs[0]++;
+	kstat_incr_irqs_this_cpu(0, irq_to_desc(0));
 	if (unlikely(!evt->event_handler)) {
 		printk(KERN_WARNING
...
(This diff has been collapsed.)
@@ -50,7 +50,7 @@ config M386
 config M486
 	bool "486"
 	depends on X86_32
-	help
+	---help---
 	  Select this for a 486 series processor, either Intel or one of the
 	  compatible processors from AMD, Cyrix, IBM, or Intel.  Includes DX,
 	  DX2, and DX4 variants; also SL/SLC/SLC2/SLC3/SX/SX2 and UMC U5D or
@@ -59,7 +59,7 @@ config M486
 config M586
 	bool "586/K5/5x86/6x86/6x86MX"
 	depends on X86_32
-	help
+	---help---
 	  Select this for an 586 or 686 series processor such as the AMD K5,
 	  the Cyrix 5x86, 6x86 and 6x86MX.  This choice does not
 	  assume the RDTSC (Read Time Stamp Counter) instruction.
@@ -67,21 +67,21 @@ config M586
 config M586TSC
 	bool "Pentium-Classic"
 	depends on X86_32
-	help
+	---help---
 	  Select this for a Pentium Classic processor with the RDTSC (Read
 	  Time Stamp Counter) instruction for benchmarking.
 config M586MMX
 	bool "Pentium-MMX"
 	depends on X86_32
-	help
+	---help---
 	  Select this for a Pentium with the MMX graphics/multimedia
 	  extended instructions.
 config M686
 	bool "Pentium-Pro"
 	depends on X86_32
-	help
+	---help---
 	  Select this for Intel Pentium Pro chips.  This enables the use of
 	  Pentium Pro extended instructions, and disables the init-time guard
 	  against the f00f bug found in earlier Pentiums.
@@ -89,7 +89,7 @@ config M686
 config MPENTIUMII
 	bool "Pentium-II/Celeron(pre-Coppermine)"
 	depends on X86_32
-	help
+	---help---
 	  Select this for Intel chips based on the Pentium-II and
 	  pre-Coppermine Celeron core.  This option enables an unaligned
 	  copy optimization, compiles the kernel with optimization flags
@@ -99,7 +99,7 @@ config MPENTIUMII
 config MPENTIUMIII
 	bool "Pentium-III/Celeron(Coppermine)/Pentium-III Xeon"
 	depends on X86_32
-	help
+	---help---
 	  Select this for Intel chips based on the Pentium-III and
 	  Celeron-Coppermine core.  This option enables use of some
 	  extended prefetch instructions in addition to the Pentium II
@@ -108,14 +108,14 @@ config MPENTIUMIII
 config MPENTIUMM
 	bool "Pentium M"
 	depends on X86_32
-	help
+	---help---
 	  Select this for Intel Pentium M (not Pentium-4 M)
 	  notebook chips.
 config MPENTIUM4
 	bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
 	depends on X86_32
-	help
+	---help---
 	  Select this for Intel Pentium 4 chips.  This includes the
 	  Pentium 4, Pentium D, P4-based Celeron and Xeon, and
 	  Pentium-4 M (not Pentium M) chips.  This option enables compile
@@ -151,7 +151,7 @@ config MPENTIUM4
 config MK6
 	bool "K6/K6-II/K6-III"
 	depends on X86_32
-	help
+	---help---
 	  Select this for an AMD K6-family processor.  Enables use of
 	  some extended instructions, and passes appropriate optimization
 	  flags to GCC.
@@ -159,14 +159,14 @@ config MK6
 config MK7
 	bool "Athlon/Duron/K7"
 	depends on X86_32
-	help
+	---help---
 	  Select this for an AMD Athlon K7-family processor.  Enables use of
 	  some extended instructions, and passes appropriate optimization
 	  flags to GCC.
 config MK8
 	bool "Opteron/Athlon64/Hammer/K8"
-	help
+	---help---
 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
 	  Enables use of some extended instructions, and passes appropriate
 	  optimization flags to GCC.
@@ -174,7 +174,7 @@ config MK8
 config MCRUSOE
 	bool "Crusoe"
 	depends on X86_32
-	help
+	---help---
 	  Select this for a Transmeta Crusoe processor.  Treats the processor
 	  like a 586 with TSC, and sets some GCC optimization flags (like a
 	  Pentium Pro with no alignment requirements).
@@ -182,13 +182,13 @@ config MCRUSOE
 config MEFFICEON
 	bool "Efficeon"
 	depends on X86_32
-	help
+	---help---
 	  Select this for a Transmeta Efficeon processor.
 config MWINCHIPC6
 	bool "Winchip-C6"
 	depends on X86_32
-	help
+	---help---
 	  Select this for an IDT Winchip C6 chip.  Linux and GCC
 	  treat this chip as a 586TSC with some extended instructions
 	  and alignment requirements.
@@ -196,7 +196,7 @@ config MWINCHIPC6
 config MWINCHIP3D
 	bool "Winchip-2/Winchip-2A/Winchip-3"
 	depends on X86_32
-	help
+	---help---
 	  Select this for an IDT Winchip-2, 2A or 3.  Linux and GCC
 	  treat this chip as a 586TSC with some extended instructions
 	  and alignment requirements.  Also enable out of order memory
@@ -206,19 +206,19 @@ config MWINCHIP3D
 config MGEODEGX1
 	bool "GeodeGX1"
 	depends on X86_32
-	help
+	---help---
 	  Select this for a Geode GX1 (Cyrix MediaGX) chip.
 config MGEODE_LX
 	bool "Geode GX/LX"
 	depends on X86_32
-	help
+	---help---
 	  Select this for AMD Geode GX and LX processors.
 config MCYRIXIII
 	bool "CyrixIII/VIA-C3"
 	depends on X86_32
-	help
+	---help---
 	  Select this for a Cyrix III or C3 chip.  Presently Linux and GCC
 	  treat this chip as a generic 586.  Whilst the CPU is 686 class,
 	  it lacks the cmov extension which gcc assumes is present when
@@ -230,7 +230,7 @@ config MCYRIXIII
 config MVIAC3_2
 	bool "VIA C3-2 (Nehemiah)"
 	depends on X86_32
-	help
+	---help---
 	  Select this for a VIA C3 "Nehemiah".  Selecting this enables usage
 	  of SSE and tells gcc to treat the CPU as a 686.
 	  Note, this kernel will not boot on older (pre model 9) C3s.
@@ -238,14 +238,14 @@ config MVIAC3_2
 config MVIAC7
 	bool "VIA C7"
 	depends on X86_32
-	help
+	---help---
 	  Select this for a VIA C7.  Selecting this uses the correct cache
 	  shift and tells gcc to treat the CPU as a 686.
 config MPSC
 	bool "Intel P4 / older Netburst based Xeon"
 	depends on X86_64
-	help
+	---help---
 	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
 	  Xeon CPUs with Intel 64bit which is compatible with x86-64.
 	  Note that the latest Xeons (Xeon 51xx and 53xx) are not based on the
@@ -255,7 +255,7 @@ config MPSC
 config MCORE2
 	bool "Core 2/newer Xeon"
-	help
+	---help---
 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
 	  53xx) CPUs. You can distinguish newer from older Xeons by the CPU
@@ -265,7 +265,7 @@ config MCORE2
 config GENERIC_CPU
 	bool "Generic-x86-64"
 	depends on X86_64
-	help
+	---help---
 	  Generic x86-64 CPU.
 	  Run equally well on all x86-64 CPUs.
@@ -274,7 +274,7 @@ endchoice
 config X86_GENERIC
 	bool "Generic x86 support"
 	depends on X86_32
-	help
+	---help---
 	  Instead of just including optimizations for the selected
 	  x86 variant (e.g. PII, Crusoe or Athlon), include some more
 	  generic optimizations as well. This will make the kernel
@@ -294,25 +294,23 @@ config X86_CPU
 # Define implied options from the CPU selection here
 config X86_L1_CACHE_BYTES
 	int
-	default "128" if GENERIC_CPU || MPSC
-	default "64" if MK8 || MCORE2
-	depends on X86_64
+	default "128" if MPSC
+	default "64" if GENERIC_CPU || MK8 || MCORE2 || X86_32
 config X86_INTERNODE_CACHE_BYTES
 	int
 	default "4096" if X86_VSMP
 	default X86_L1_CACHE_BYTES if !X86_VSMP
-	depends on X86_64
 config X86_CMPXCHG
 	def_bool X86_64 || (X86_32 && !M386)
 config X86_L1_CACHE_SHIFT
 	int
-	default "7" if MPENTIUM4 || X86_GENERIC || GENERIC_CPU || MPSC
+	default "7" if MPENTIUM4 || MPSC
 	default "4" if X86_ELAN || M486 || M386 || MGEODEGX1
 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MVIAC7
+	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MVIAC7 || X86_GENERIC || GENERIC_CPU
 config X86_XADD
 	def_bool y
@@ -321,7 +319,7 @@ config X86_XADD
 config X86_PPRO_FENCE
 	bool "PentiumPro memory ordering errata workaround"
 	depends on M686 || M586MMX || M586TSC || M586 || M486 || M386 || MGEODEGX1
-	help
+	---help---
 	  Old PentiumPro multiprocessor systems had errata that could cause
 	  memory operations to violate the x86 ordering standard in rare cases.
 	  Enabling this option will attempt to work around some (but not all)
@@ -414,14 +412,14 @@ config X86_DEBUGCTLMSR
 menuconfig PROCESSOR_SELECT
 	bool "Supported processor vendors" if EMBEDDED
-	help
+	---help---
 	  This lets you choose what x86 vendor support code your kernel
 	  will include.
 config CPU_SUP_INTEL
 	default y
 	bool "Support Intel processors" if PROCESSOR_SELECT
-	help
+	---help---
 	  This enables detection, tunings and quirks for Intel processors
 	  You need this enabled if you want your kernel to run on an
@@ -435,7 +433,7 @@ config CPU_SUP_CYRIX_32
 	default y
 	bool "Support Cyrix processors" if PROCESSOR_SELECT
 	depends on !64BIT
-	help
+	---help---
 	  This enables detection, tunings and quirks for Cyrix processors
 	  You need this enabled if you want your kernel to run on a
@@ -448,7 +446,7 @@ config CPU_SUP_CYRIX_32
 config CPU_SUP_AMD
 	default y
 	bool "Support AMD processors" if PROCESSOR_SELECT
-	help
+	---help---
 	  This enables detection, tunings and quirks for AMD processors
 	  You need this enabled if you want your kernel to run on an
@@ -462,7 +460,7 @@ config CPU_SUP_CENTAUR_32
 	default y
 	bool "Support Centaur processors" if PROCESSOR_SELECT
 	depends on !64BIT
-	help
+	---help---
 	  This enables detection, tunings and quirks for Centaur processors
 	  You need this enabled if you want your kernel to run on a
@@ -476,7 +474,7 @@ config CPU_SUP_CENTAUR_64
 	default y
 	bool "Support Centaur processors" if PROCESSOR_SELECT
 	depends on 64BIT
-	help
+	---help---
 	  This enables detection, tunings and quirks for Centaur processors
 	  You need this enabled if you want your kernel to run on a
@@ -490,7 +488,7 @@ config CPU_SUP_TRANSMETA_32
 	default y
 	bool "Support Transmeta processors" if PROCESSOR_SELECT
 	depends on !64BIT
-	help
+	---help---
 	  This enables detection, tunings and quirks for Transmeta processors
 	  You need this enabled if you want your kernel to run on a
@@ -504,7 +502,7 @@ config CPU_SUP_UMC_32
 	default y
 	bool "Support UMC processors" if PROCESSOR_SELECT
 	depends on !64BIT
-	help
+	---help---
 	  This enables detection, tunings and quirks for UMC processors
 	  You need this enabled if you want your kernel to run on a
@@ -523,7 +521,7 @@ config X86_PTRACE_BTS
 	bool "Branch Trace Store"
 	default y
 	depends on X86_DEBUGCTLMSR
-	help
+	---help---
 	  This adds a ptrace interface to the hardware's branch trace store.
 	  Debuggers may use it to collect an execution trace of the debugged
...
@@ -7,7 +7,7 @@ source "lib/Kconfig.debug"
 config STRICT_DEVMEM
 	bool "Filter access to /dev/mem"
-	help
+	---help---
 	  If this option is disabled, you allow userspace (root) access to all
 	  of memory, including kernel and userspace memory. Accidental
 	  access to this is obviously disastrous, but specific access can
@@ -25,7 +25,7 @@ config STRICT_DEVMEM
 config X86_VERBOSE_BOOTUP
 	bool "Enable verbose x86 bootup info messages"
 	default y
-	help
+	---help---
 	  Enables the informational output from the decompression stage
 	  (e.g. bzImage) of the boot. If you disable this you will still
 	  see errors. Disable this if you want silent bootup.
@@ -33,7 +33,7 @@ config X86_VERBOSE_BOOTUP
 config EARLY_PRINTK
 	bool "Early printk" if EMBEDDED
 	default y
-	help
+	---help---
 	  Write kernel log output directly into the VGA buffer or to a serial
 	  port.
@@ -47,7 +47,7 @@ config EARLY_PRINTK_DBGP
 	bool "Early printk via EHCI debug port"
 	default n
 	depends on EARLY_PRINTK && PCI
-	help
+	---help---
 	  Write kernel log output directly into the EHCI debug port.
 	  This is useful for kernel debugging when your machine crashes very
@@ -59,14 +59,14 @@ config EARLY_PRINTK_DBGP
 config DEBUG_STACKOVERFLOW
 	bool "Check for stack overflows"
 	depends on DEBUG_KERNEL
-	help
+	---help---
 	  This option will cause messages to be printed if free stack space
 	  drops below a certain limit.
 config DEBUG_STACK_USAGE
 	bool "Stack utilization instrumentation"
 	depends on DEBUG_KERNEL
-	help
+	---help---
 	  Enables the display of the minimum amount of free stack which each
 	  task has ever had available in the sysrq-T and sysrq-P debug output.
@@ -75,7 +75,7 @@ config DEBUG_STACK_USAGE
 config DEBUG_PAGEALLOC
 	bool "Debug page memory allocations"
 	depends on DEBUG_KERNEL
-	help
+	---help---
 	  Unmap pages from the kernel linear mapping after free_pages().
 	  This results in a large slowdown, but helps to find certain types
 	  of memory corruptions.
@@ -83,9 +83,9 @@ config DEBUG_PAGEALLOC
 config DEBUG_PER_CPU_MAPS
 	bool "Debug access to per_cpu maps"
 	depends on DEBUG_KERNEL
-	depends on X86_SMP
+	depends on SMP
 	default n
-	help
+	---help---
 	  Say Y to verify that the per_cpu map being accessed has
 	  been setup. Adds a fair amount of code to kernel memory
 	  and decreases performance.
@@ -96,7 +96,7 @@ config X86_PTDUMP
 	bool "Export kernel pagetable layout to userspace via debugfs"
 	depends on DEBUG_KERNEL
 	select DEBUG_FS
-	help
+	---help---
 	  Say Y here if you want to show the kernel pagetable layout in a
 	  debugfs file. This information is only useful for kernel developers
 	  who are working in architecture specific areas of the kernel.
@@ -108,7 +108,7 @@ config DEBUG_RODATA
 config DEBUG_RODATA
 	bool "Write protect kernel read-only data structures"
 	default y
 	depends on DEBUG_KERNEL
-	help
+	---help---
 	  Mark the kernel read-only data as write-protected in the pagetables,
 	  in order to catch accidental (and incorrect) writes to such const
 	  data. This is recommended so that we can catch kernel bugs sooner.
@@ -117,7 +117,8 @@ config DEBUG_RODATA
 config DEBUG_RODATA_TEST
 	bool "Testcase for the DEBUG_RODATA feature"
depends on DEBUG_RODATA depends on DEBUG_RODATA
help default y
---help---
This option enables a testcase for the DEBUG_RODATA This option enables a testcase for the DEBUG_RODATA
feature as well as for the change_page_attr() infrastructure. feature as well as for the change_page_attr() infrastructure.
If in doubt, say "N" If in doubt, say "N"
...@@ -125,7 +126,7 @@ config DEBUG_RODATA_TEST ...@@ -125,7 +126,7 @@ config DEBUG_RODATA_TEST
config DEBUG_NX_TEST config DEBUG_NX_TEST
tristate "Testcase for the NX non-executable stack feature" tristate "Testcase for the NX non-executable stack feature"
depends on DEBUG_KERNEL && m depends on DEBUG_KERNEL && m
help ---help---
This option enables a testcase for the CPU NX capability This option enables a testcase for the CPU NX capability
and the software setup of this feature. and the software setup of this feature.
If in doubt, say "N" If in doubt, say "N"
...@@ -133,7 +134,7 @@ config DEBUG_NX_TEST ...@@ -133,7 +134,7 @@ config DEBUG_NX_TEST
config 4KSTACKS config 4KSTACKS
bool "Use 4Kb for kernel stacks instead of 8Kb" bool "Use 4Kb for kernel stacks instead of 8Kb"
depends on X86_32 depends on X86_32
help ---help---
If you say Y here the kernel will use a 4Kb stacksize for the If you say Y here the kernel will use a 4Kb stacksize for the
kernel stack attached to each process/thread. This facilitates kernel stack attached to each process/thread. This facilitates
running more threads on a system and also reduces the pressure running more threads on a system and also reduces the pressure
...@@ -144,7 +145,7 @@ config DOUBLEFAULT ...@@ -144,7 +145,7 @@ config DOUBLEFAULT
default y default y
bool "Enable doublefault exception handler" if EMBEDDED bool "Enable doublefault exception handler" if EMBEDDED
depends on X86_32 depends on X86_32
help ---help---
This option allows trapping of rare doublefault exceptions that This option allows trapping of rare doublefault exceptions that
would otherwise cause a system to silently reboot. Disabling this would otherwise cause a system to silently reboot. Disabling this
option saves about 4k and might cause you much additional grey option saves about 4k and might cause you much additional grey
...@@ -154,7 +155,7 @@ config IOMMU_DEBUG ...@@ -154,7 +155,7 @@ config IOMMU_DEBUG
bool "Enable IOMMU debugging" bool "Enable IOMMU debugging"
depends on GART_IOMMU && DEBUG_KERNEL depends on GART_IOMMU && DEBUG_KERNEL
depends on X86_64 depends on X86_64
help ---help---
Force the IOMMU to on even when you have less than 4GB of Force the IOMMU to on even when you have less than 4GB of
memory and add debugging code. On overflow always panic. And memory and add debugging code. On overflow always panic. And
allow to enable IOMMU leak tracing. Can be disabled at boot allow to enable IOMMU leak tracing. Can be disabled at boot
...@@ -170,7 +171,7 @@ config IOMMU_LEAK ...@@ -170,7 +171,7 @@ config IOMMU_LEAK
bool "IOMMU leak tracing" bool "IOMMU leak tracing"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
depends on IOMMU_DEBUG depends on IOMMU_DEBUG
help ---help---
Add a simple leak tracer to the IOMMU code. This is useful when you Add a simple leak tracer to the IOMMU code. This is useful when you
are debugging a buggy device driver that leaks IOMMU mappings. are debugging a buggy device driver that leaks IOMMU mappings.
...@@ -203,25 +204,25 @@ choice ...@@ -203,25 +204,25 @@ choice
config IO_DELAY_0X80 config IO_DELAY_0X80
bool "port 0x80 based port-IO delay [recommended]" bool "port 0x80 based port-IO delay [recommended]"
help ---help---
This is the traditional Linux IO delay used for in/out_p. This is the traditional Linux IO delay used for in/out_p.
It is the most tested hence safest selection here. It is the most tested hence safest selection here.
config IO_DELAY_0XED config IO_DELAY_0XED
bool "port 0xed based port-IO delay" bool "port 0xed based port-IO delay"
help ---help---
Use port 0xed as the IO delay. This frees up port 0x80 which is Use port 0xed as the IO delay. This frees up port 0x80 which is
often used as a hardware-debug port. often used as a hardware-debug port.
config IO_DELAY_UDELAY config IO_DELAY_UDELAY
bool "udelay based port-IO delay" bool "udelay based port-IO delay"
help ---help---
Use udelay(2) as the IO delay method. This provides the delay Use udelay(2) as the IO delay method. This provides the delay
while not having any side-effect on the IO port space. while not having any side-effect on the IO port space.
config IO_DELAY_NONE config IO_DELAY_NONE
bool "no port-IO delay" bool "no port-IO delay"
help ---help---
No port-IO delay. Will break on old boxes that require port-IO No port-IO delay. Will break on old boxes that require port-IO
delay for certain operations. Should work on most new machines. delay for certain operations. Should work on most new machines.
...@@ -255,18 +256,18 @@ config DEBUG_BOOT_PARAMS ...@@ -255,18 +256,18 @@ config DEBUG_BOOT_PARAMS
bool "Debug boot parameters" bool "Debug boot parameters"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
depends on DEBUG_FS depends on DEBUG_FS
help ---help---
This option will cause struct boot_params to be exported via debugfs. This option will cause struct boot_params to be exported via debugfs.
config CPA_DEBUG config CPA_DEBUG
bool "CPA self-test code" bool "CPA self-test code"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
help ---help---
Do change_page_attr() self-tests every 30 seconds. Do change_page_attr() self-tests every 30 seconds.
config OPTIMIZE_INLINING config OPTIMIZE_INLINING
bool "Allow gcc to uninline functions marked 'inline'" bool "Allow gcc to uninline functions marked 'inline'"
help ---help---
This option determines if the kernel forces gcc to inline the functions This option determines if the kernel forces gcc to inline the functions
developers have marked 'inline'. Doing so takes away freedom from gcc to developers have marked 'inline'. Doing so takes away freedom from gcc to
do what it thinks is best, which is desirable for the gcc 3.x series of do what it thinks is best, which is desirable for the gcc 3.x series of
...@@ -279,4 +280,3 @@ config OPTIMIZE_INLINING ...@@ -279,4 +280,3 @@ config OPTIMIZE_INLINING
If unsure, say N. If unsure, say N.
endmenu endmenu
@@ -70,14 +70,17 @@ else

 # this works around some issues with generating unwind tables in older gccs
 # newer gccs do it by default
         KBUILD_CFLAGS += -maccumulate-outgoing-args
+endif

-        stackp := $(CONFIG_SHELL) $(srctree)/scripts/gcc-x86_64-has-stack-protector.sh
-        stackp-$(CONFIG_CC_STACKPROTECTOR) := $(shell $(stackp) \
-                "$(CC)" -fstack-protector )
-        stackp-$(CONFIG_CC_STACKPROTECTOR_ALL) += $(shell $(stackp) \
-                "$(CC)" -fstack-protector-all )
-
-        KBUILD_CFLAGS += $(stackp-y)
+ifdef CONFIG_CC_STACKPROTECTOR
+	cc_has_sp := $(srctree)/scripts/gcc-x86_$(BITS)-has-stack-protector.sh
+	ifeq ($(shell $(CONFIG_SHELL) $(cc_has_sp) $(CC)),y)
+		stackp-y := -fstack-protector
+		stackp-$(CONFIG_CC_STACKPROTECTOR_ALL) += -fstack-protector-all
+		KBUILD_CFLAGS += $(stackp-y)
+	else
+		$(warning stack protector enabled but no compiler support)
+	endif
 endif

 # Stackpointer is addressed different for 32 bit and 64 bit x86
@@ -102,29 +105,6 @@ KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
 # prevent gcc from generating any FP code by mistake
 KBUILD_CFLAGS += $(call cc-option,-mno-sse -mno-mmx -mno-sse2 -mno-3dnow,)

-###
-# Sub architecture support
-# fcore-y is linked before mcore-y files.
-
-# Default subarch .c files
-mcore-y  := arch/x86/mach-default/
-
-# Voyager subarch support
-mflags-$(CONFIG_X86_VOYAGER)	:= -Iarch/x86/include/asm/mach-voyager
-mcore-$(CONFIG_X86_VOYAGER)	:= arch/x86/mach-voyager/
-
-# generic subarchitecture
-mflags-$(CONFIG_X86_GENERICARCH):= -Iarch/x86/include/asm/mach-generic
-fcore-$(CONFIG_X86_GENERICARCH)	+= arch/x86/mach-generic/
-mcore-$(CONFIG_X86_GENERICARCH)	:= arch/x86/mach-default/
-
-# default subarch .h files
-mflags-y += -Iarch/x86/include/asm/mach-default
-
-# 64 bit does not support subarch support - clear sub arch variables
-fcore-$(CONFIG_X86_64)  :=
-mcore-$(CONFIG_X86_64)  :=
-
 KBUILD_CFLAGS += $(mflags-y)
 KBUILD_AFLAGS += $(mflags-y)

@@ -150,9 +130,6 @@ core-$(CONFIG_LGUEST_GUEST) += arch/x86/lguest/
 core-y += arch/x86/kernel/
 core-y += arch/x86/mm/

-# Remaining sub architecture files
-core-y += $(mcore-y)
-
 core-y += arch/x86/crypto/
 core-y += arch/x86/vdso/
 core-$(CONFIG_IA32_EMULATION) += arch/x86/ia32/
......
@@ -32,7 +32,6 @@ setup-y += a20.o cmdline.o copy.o cpu.o cpucheck.o edd.o
 setup-y		+= header.o main.o mca.o memory.o pm.o pmjump.o
 setup-y		+= printf.o string.o tty.o video.o video-mode.o version.o
 setup-$(CONFIG_X86_APM_BOOT) += apm.o
-setup-$(CONFIG_X86_VOYAGER) += voyager.o

 # The link order of the video-*.o modules can matter.  In particular,
 # video-vga.o *must* be listed first, followed by video-vesa.o.
......
@@ -2,6 +2,7 @@
  *
  * Copyright (C) 1991, 1992 Linus Torvalds
  * Copyright 2007-2008 rPath, Inc. - All Rights Reserved
+ * Copyright 2009 Intel Corporation
  *
  * This file is part of the Linux kernel, and is made available under
  * the terms of the GNU General Public License version 2.
@@ -15,16 +16,23 @@
 #include "boot.h"

 #define MAX_8042_LOOPS	100000
+#define MAX_8042_FF	32

 static int empty_8042(void)
 {
 	u8 status;
 	int loops = MAX_8042_LOOPS;
+	int ffs   = MAX_8042_FF;

 	while (loops--) {
 		io_delay();

 		status = inb(0x64);
+		if (status == 0xff) {
+			/* FF is a plausible, but very unlikely status */
+			if (!--ffs)
+				return -1; /* Assume no KBC present */
+		}
 		if (status & 1) {
 			/* Read and discard input data */
 			io_delay();
@@ -118,44 +126,37 @@ static void enable_a20_fast(void)

 int enable_a20(void)
 {
-#if defined(CONFIG_X86_ELAN)
-	/* Elan croaks if we try to touch the KBC */
-	enable_a20_fast();
-	while (!a20_test_long())
-		;
-	return 0;
-#elif defined(CONFIG_X86_VOYAGER)
-	/* On Voyager, a20_test() is unsafe? */
-	enable_a20_kbc();
-	return 0;
-#else
 	int loops = A20_ENABLE_LOOPS;
+	int kbc_err;
+
 	while (loops--) {
 		/* First, check to see if A20 is already enabled
 		   (legacy free, etc.) */
 		if (a20_test_short())
 			return 0;

 		/* Next, try the BIOS (INT 0x15, AX=0x2401) */
 		enable_a20_bios();
 		if (a20_test_short())
 			return 0;

 		/* Try enabling A20 through the keyboard controller */
-		empty_8042();
+		kbc_err = empty_8042();
+
 		if (a20_test_short())
 			return 0; /* BIOS worked, but with delayed reaction */

-		enable_a20_kbc();
-		if (a20_test_long())
-			return 0;
+		if (!kbc_err) {
+			enable_a20_kbc();
+			if (a20_test_long())
+				return 0;
+		}

 		/* Finally, try enabling the "fast A20 gate" */
 		enable_a20_fast();
 		if (a20_test_long())
 			return 0;
 	}

 	return -1;
-#endif
 }
@@ -302,9 +302,6 @@ void probe_cards(int unsafe);
 /* video-vesa.c */
 void vesa_store_edid(void);

-/* voyager.c */
-int query_voyager(void);
-
 #endif /* __ASSEMBLY__ */

 #endif /* BOOT_BOOT_H */
@@ -4,7 +4,7 @@
 # create a compressed vmlinux image from the original vmlinux
 #

-targets := vmlinux vmlinux.bin vmlinux.bin.gz head_$(BITS).o misc.o piggy.o
+targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma head_$(BITS).o misc.o piggy.o

 KBUILD_CFLAGS := -m$(BITS) -D__KERNEL__ $(LINUX_INCLUDE) -O2
 KBUILD_CFLAGS += -fno-strict-aliasing -fPIC
@@ -47,18 +47,35 @@ ifeq ($(CONFIG_X86_32),y)
 ifdef CONFIG_RELOCATABLE
 $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin.all FORCE
 	$(call if_changed,gzip)
+$(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin.all FORCE
+	$(call if_changed,bzip2)
+$(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin.all FORCE
+	$(call if_changed,lzma)
 else
 $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
 	$(call if_changed,gzip)
+$(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin FORCE
+	$(call if_changed,bzip2)
+$(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin FORCE
+	$(call if_changed,lzma)
 endif
 LDFLAGS_piggy.o := -r --format binary --oformat elf32-i386 -T

 else
 $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
 	$(call if_changed,gzip)
+$(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin FORCE
+	$(call if_changed,bzip2)
+$(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin FORCE
+	$(call if_changed,lzma)

 LDFLAGS_piggy.o := -r --format binary --oformat elf64-x86-64 -T
 endif

-$(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.gz FORCE
+suffix_$(CONFIG_KERNEL_GZIP)  = gz
+suffix_$(CONFIG_KERNEL_BZIP2) = bz2
+suffix_$(CONFIG_KERNEL_LZMA)  = lzma
+
+$(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.$(suffix_y) FORCE
 	$(call if_changed,ld)
@@ -25,14 +25,12 @@

 #include <linux/linkage.h>
 #include <asm/segment.h>
-#include <asm/page.h>
+#include <asm/page_types.h>
 #include <asm/boot.h>
 #include <asm/asm-offsets.h>

 	.section ".text.head","ax",@progbits
-	.globl startup_32
-
-startup_32:
+ENTRY(startup_32)
 	cld
 	/* test KEEP_SEGMENTS flag to see if the bootloader is asking
 	 * us to not reload segments */
@@ -113,6 +111,8 @@ startup_32:
 	 */
 	leal	relocated(%ebx), %eax
 	jmp	*%eax
+ENDPROC(startup_32)
+
 .section ".text"
 relocated:
......
@@ -26,8 +26,8 @@

 #include <linux/linkage.h>
 #include <asm/segment.h>
-#include <asm/pgtable.h>
-#include <asm/page.h>
+#include <asm/pgtable_types.h>
+#include <asm/page_types.h>
 #include <asm/boot.h>
 #include <asm/msr.h>
 #include <asm/processor-flags.h>
@@ -35,9 +35,7 @@

 	.section ".text.head"
 	.code32
-	.globl startup_32
-
-startup_32:
+ENTRY(startup_32)
 	cld
 	/* test KEEP_SEGMENTS flag to see if the bootloader is asking
 	 * us to not reload segments */
@@ -176,6 +174,7 @@ startup_32:

 	/* Jump from 32bit compatibility mode into 64bit mode. */
 	lret
+ENDPROC(startup_32)

 no_longmode:
 	/* This isn't an x86-64 CPU so hang */
@@ -295,7 +294,6 @@ relocated:

 	call	decompress_kernel
 	popq	%rsi
-
 	/*
 	 * Jump to the decompressed kernel.
 	 */
......
@@ -116,71 +116,13 @@
 /*
  * gzip declarations
  */
-#define OF(args)	args
 #define STATIC		static

 #undef memset
 #undef memcpy
 #define memzero(s, n)	memset((s), 0, (n))

-typedef unsigned char	uch;
-typedef unsigned short	ush;
-typedef unsigned long	ulg;
-
-/*
- * Window size must be at least 32k, and a power of two.
- * We don't actually have a window just a huge output buffer,
- * so we report a 2G window size, as that should always be
- * larger than our output buffer:
- */
-#define WSIZE		0x80000000
-
-/* Input buffer: */
-static unsigned char	*inbuf;
-
-/* Sliding window buffer (and final output buffer): */
-static unsigned char	*window;
-
-/* Valid bytes in inbuf: */
-static unsigned		insize;
-
-/* Index of next byte to be processed in inbuf: */
-static unsigned		inptr;
-
-/* Bytes in output buffer: */
-static unsigned		outcnt;
-
-/* gzip flag byte */
-#define ASCII_FLAG	0x01 /* bit 0 set: file probably ASCII text */
-#define CONTINUATION	0x02 /* bit 1 set: continuation of multi-part gz file */
-#define EXTRA_FIELD	0x04 /* bit 2 set: extra field present */
-#define ORIG_NAM	0x08 /* bit 3 set: original file name present */
-#define COMMENT		0x10 /* bit 4 set: file comment present */
-#define ENCRYPTED	0x20 /* bit 5 set: file is encrypted */
-#define RESERVED	0xC0 /* bit 6, 7: reserved */
-
-#define get_byte()	(inptr < insize ? inbuf[inptr++] : fill_inbuf())
-
-/* Diagnostic functions */
-#ifdef DEBUG
-#  define Assert(cond, msg) do { if (!(cond)) error(msg); } while (0)
-#  define Trace(x)	do { fprintf x; } while (0)
-#  define Tracev(x)	do { if (verbose) fprintf x ; } while (0)
-#  define Tracevv(x)	do { if (verbose > 1) fprintf x ; } while (0)
-#  define Tracec(c, x)	do { if (verbose && (c)) fprintf x ; } while (0)
-#  define Tracecv(c, x)	do { if (verbose > 1 && (c)) fprintf x ; } while (0)
-#else
-#  define Assert(cond, msg)
-#  define Trace(x)
-#  define Tracev(x)
-#  define Tracevv(x)
-#  define Tracec(c, x)
-#  define Tracecv(c, x)
-#endif
-
-static int  fill_inbuf(void);
-static void flush_window(void);
 static void error(char *m);

 /*
@@ -189,13 +131,8 @@ static void error(char *m);
 static struct boot_params *real_mode;		/* Pointer to real-mode data */
 static int quiet;

-extern unsigned char input_data[];
-extern int input_len;
-
-static long bytes_out;
-
 static void *memset(void *s, int c, unsigned n);
-static void *memcpy(void *dest, const void *src, unsigned n);
+void *memcpy(void *dest, const void *src, unsigned n);

 static void __putstr(int, const char *);
 #define putstr(__x)  __putstr(0, __x)
@@ -213,7 +150,17 @@ static char *vidmem;
 static int vidport;
 static int lines, cols;

-#include "../../../../lib/inflate.c"
+#ifdef CONFIG_KERNEL_GZIP
+#include "../../../../lib/decompress_inflate.c"
+#endif
+
+#ifdef CONFIG_KERNEL_BZIP2
+#include "../../../../lib/decompress_bunzip2.c"
+#endif
+
+#ifdef CONFIG_KERNEL_LZMA
+#include "../../../../lib/decompress_unlzma.c"
+#endif

 static void scroll(void)
 {
@@ -282,7 +229,7 @@ static void *memset(void *s, int c, unsigned n)
 	return s;
 }

-static void *memcpy(void *dest, const void *src, unsigned n)
+void *memcpy(void *dest, const void *src, unsigned n)
 {
 	int i;
 	const char *s = src;
@@ -293,38 +240,6 @@ static void *memcpy(void *dest, const void *src, unsigned n)
 	return dest;
 }

-/* ===========================================================================
- * Fill the input buffer. This is called only when the buffer is empty
- * and at least one byte is really needed.
- */
-static int fill_inbuf(void)
-{
-	error("ran out of input data");
-	return 0;
-}
-
-/* ===========================================================================
- * Write the output window window[0..outcnt-1] and update crc and bytes_out.
- * (Used for the decompressed data only.)
- */
-static void flush_window(void)
-{
-	/* With my window equal to my output buffer
-	 * I only need to compute the crc here.
-	 */
-	unsigned long c = crc;         /* temporary variable */
-	unsigned n;
-	unsigned char *in, ch;
-
-	in = window;
-	for (n = 0; n < outcnt; n++) {
-		ch = *in++;
-		c = crc_32_tab[((int)c ^ ch) & 0xff] ^ (c >> 8);
-	}
-	crc = c;
-	bytes_out += (unsigned long)outcnt;
-	outcnt = 0;
-}

 static void error(char *x)
 {
@@ -407,12 +322,8 @@ asmlinkage void decompress_kernel(void *rmode, memptr heap,
 	lines = real_mode->screen_info.orig_video_lines;
 	cols = real_mode->screen_info.orig_video_cols;

-	window = output;		/* Output buffer (Normally at 1M) */
 	free_mem_ptr     = heap;	/* Heap */
 	free_mem_end_ptr = heap + BOOT_HEAP_SIZE;
-	inbuf  = input_data;		/* Input buffer */
-	insize = input_len;
-	inptr  = 0;

 #ifdef CONFIG_X86_64
 	if ((unsigned long)output & (__KERNEL_ALIGN - 1))
@@ -430,10 +341,9 @@ asmlinkage void decompress_kernel(void *rmode, memptr heap,
 #endif
 #endif

-	makecrc();
 	if (!quiet)
 		putstr("\nDecompressing Linux... ");
-	gunzip();
+	decompress(input_data, input_len, NULL, NULL, output, NULL, error);
 	parse_elf(output);
 	if (!quiet)
 		putstr("done.\nBooting the kernel.\n");
......
@@ -8,6 +8,8 @@
  *
  * ----------------------------------------------------------------------- */

+#include <linux/linkage.h>
+
 /*
  * Memory copy routines
  */
@@ -15,9 +17,7 @@
 	.code16gcc
 	.text

-	.globl	memcpy
-	.type	memcpy, @function
-memcpy:
+GLOBAL(memcpy)
 	pushw	%si
 	pushw	%di
 	movw	%ax, %di
@@ -31,11 +31,9 @@ memcpy:
 	popw	%di
 	popw	%si
 	ret
-	.size	memcpy, .-memcpy
+ENDPROC(memcpy)

-	.globl	memset
-	.type	memset, @function
-memset:
+GLOBAL(memset)
 	pushw	%di
 	movw	%ax, %di
 	movzbl	%dl, %eax
@@ -48,52 +46,42 @@ memset:
 	rep; stosb
 	popw	%di
 	ret
-	.size	memset, .-memset
+ENDPROC(memset)

-	.globl	copy_from_fs
-	.type	copy_from_fs, @function
-copy_from_fs:
+GLOBAL(copy_from_fs)
 	pushw	%ds
 	pushw	%fs
 	popw	%ds
 	call	memcpy
 	popw	%ds
 	ret
-	.size	copy_from_fs, .-copy_from_fs
+ENDPROC(copy_from_fs)

-	.globl	copy_to_fs
-	.type	copy_to_fs, @function
-copy_to_fs:
+GLOBAL(copy_to_fs)
 	pushw	%es
 	pushw	%fs
 	popw	%es
 	call	memcpy
 	popw	%es
 	ret
-	.size	copy_to_fs, .-copy_to_fs
+ENDPROC(copy_to_fs)

 #if 0 /* Not currently used, but can be enabled as needed */
-
-	.globl	copy_from_gs
-	.type	copy_from_gs, @function
-copy_from_gs:
+GLOBAL(copy_from_gs)
 	pushw	%ds
 	pushw	%gs
 	popw	%ds
 	call	memcpy
 	popw	%ds
 	ret
-	.size	copy_from_gs, .-copy_from_gs
+ENDPROC(copy_from_gs)

-	.globl	copy_to_gs
-	.type	copy_to_gs, @function
-copy_to_gs:
+GLOBAL(copy_to_gs)
 	pushw	%es
 	pushw	%gs
 	popw	%es
 	call	memcpy
 	popw	%es
 	ret
-	.size	copy_to_gs, .-copy_to_gs
+ENDPROC(copy_to_gs)

 #endif
@@ -19,7 +19,7 @@
 #include <linux/utsrelease.h>
 #include <asm/boot.h>
 #include <asm/e820.h>
-#include <asm/page.h>
+#include <asm/page_types.h>
 #include <asm/setup.h>
 #include "boot.h"
 #include "offsets.h"
......
@@ -149,11 +149,6 @@ void main(void)
 	/* Query MCA information */
 	query_mca();

-	/* Voyager */
-#ifdef CONFIG_X86_VOYAGER
-	query_voyager();
-#endif
-
 	/* Query Intel SpeedStep (IST) information */
 	query_ist();
......
@@ -15,18 +15,15 @@

 #include <asm/boot.h>
 #include <asm/processor-flags.h>
 #include <asm/segment.h>
+#include <linux/linkage.h>

 	.text
-
-	.globl	protected_mode_jump
-	.type	protected_mode_jump, @function
-
 	.code16

 /*
  * void protected_mode_jump(u32 entrypoint, u32 bootparams);
  */
-protected_mode_jump:
+GLOBAL(protected_mode_jump)
 	movl	%edx, %esi		# Pointer to boot_params table
 	xorl	%ebx, %ebx
@@ -47,12 +44,10 @@ protected_mode_jump:
 	.byte	0x66, 0xea		# ljmpl opcode
 2:	.long	in_pm32			# offset
 	.word	__BOOT_CS		# segment
-
-	.size	protected_mode_jump, .-protected_mode_jump
+ENDPROC(protected_mode_jump)

 	.code32
-	.type	in_pm32, @function
-in_pm32:
+GLOBAL(in_pm32)
 	# Set up data segments for flat 32-bit mode
 	movl	%ecx, %ds
 	movl	%ecx, %es
@@ -78,5 +73,4 @@ in_pm32:
 	lldt	%cx
 	jmpl	*%eax			# Jump to the 32-bit entrypoint
-
-	.size	in_pm32, .-in_pm32
+ENDPROC(in_pm32)
-/* -*- linux-c -*- ------------------------------------------------------- *
- *
- *   Copyright (C) 1991, 1992 Linus Torvalds
- *   Copyright 2007 rPath, Inc. - All Rights Reserved
- *
- *   This file is part of the Linux kernel, and is made available under
- *   the terms of the GNU General Public License version 2.
- *
- * ----------------------------------------------------------------------- */
-
-/*
- * Get the Voyager config information
- */
-
-#include "boot.h"
-
-int query_voyager(void)
-{
-	u8 err;
-	u16 es, di;
-	/* Abuse the apm_bios_info area for this */
-	u8 *data_ptr = (u8 *)&boot_params.apm_bios_info;
-
-	data_ptr[0] = 0xff;	/* Flag on config not found(?) */
-
-	asm("pushw %%es ; "
-	    "int $0x15 ; "
-	    "setc %0 ; "
-	    "movw %%es, %1 ; "
-	    "popw %%es"
-	    : "=q" (err), "=r" (es), "=D" (di)
-	    : "a" (0xffc0));
-
-	if (err)
-		return -1;	/* Not Voyager */
-
-	set_fs(es);
-	copy_from_fs(data_ptr, di, 7);	/* Table is 7 bytes apparently */
-	return 0;
-}
This diff has been folded.
This diff has been folded.
...@@ -33,8 +33,6 @@ ...@@ -33,8 +33,6 @@
#include <asm/sigframe.h> #include <asm/sigframe.h>
#include <asm/sys_ia32.h> #include <asm/sys_ia32.h>
#define DEBUG_SIG 0
#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP))) #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
#define FIX_EFLAGS (X86_EFLAGS_AC | X86_EFLAGS_OF | \ #define FIX_EFLAGS (X86_EFLAGS_AC | X86_EFLAGS_OF | \
@@ -46,78 +44,83 @@ void signal_fault(struct pt_regs *regs, void __user *frame, char *where);
 
 int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
 {
-	int err;
+	int err = 0;
 
 	if (!access_ok(VERIFY_WRITE, to, sizeof(compat_siginfo_t)))
 		return -EFAULT;
 
-	/* If you change siginfo_t structure, please make sure that
-	   this code is fixed accordingly.
-	   It should never copy any pad contained in the structure
-	   to avoid security leaks, but must copy the generic
-	   3 ints plus the relevant union member.  */
-	err = __put_user(from->si_signo, &to->si_signo);
-	err |= __put_user(from->si_errno, &to->si_errno);
-	err |= __put_user((short)from->si_code, &to->si_code);
+	put_user_try {
+		/* If you change siginfo_t structure, please make sure that
+		   this code is fixed accordingly.
+		   It should never copy any pad contained in the structure
+		   to avoid security leaks, but must copy the generic
+		   3 ints plus the relevant union member.  */
+		put_user_ex(from->si_signo, &to->si_signo);
+		put_user_ex(from->si_errno, &to->si_errno);
+		put_user_ex((short)from->si_code, &to->si_code);
 
-	if (from->si_code < 0) {
-		err |= __put_user(from->si_pid, &to->si_pid);
-		err |= __put_user(from->si_uid, &to->si_uid);
-		err |= __put_user(ptr_to_compat(from->si_ptr), &to->si_ptr);
-	} else {
-		/*
-		 * First 32bits of unions are always present:
-		 * si_pid === si_band === si_tid === si_addr(LS half)
-		 */
-		err |= __put_user(from->_sifields._pad[0],
-				  &to->_sifields._pad[0]);
-		switch (from->si_code >> 16) {
-		case __SI_FAULT >> 16:
-			break;
-		case __SI_CHLD >> 16:
-			err |= __put_user(from->si_utime, &to->si_utime);
-			err |= __put_user(from->si_stime, &to->si_stime);
-			err |= __put_user(from->si_status, &to->si_status);
-			/* FALL THROUGH */
-		default:
-		case __SI_KILL >> 16:
-			err |= __put_user(from->si_uid, &to->si_uid);
-			break;
-		case __SI_POLL >> 16:
-			err |= __put_user(from->si_fd, &to->si_fd);
-			break;
-		case __SI_TIMER >> 16:
-			err |= __put_user(from->si_overrun, &to->si_overrun);
-			err |= __put_user(ptr_to_compat(from->si_ptr),
-					  &to->si_ptr);
-			break;
-			 /* This is not generated by the kernel as of now.  */
-		case __SI_RT >> 16:
-		case __SI_MESGQ >> 16:
-			err |= __put_user(from->si_uid, &to->si_uid);
-			err |= __put_user(from->si_int, &to->si_int);
-			break;
-		}
-	}
+		if (from->si_code < 0) {
+			put_user_ex(from->si_pid, &to->si_pid);
+			put_user_ex(from->si_uid, &to->si_uid);
+			put_user_ex(ptr_to_compat(from->si_ptr), &to->si_ptr);
+		} else {
+			/*
+			 * First 32bits of unions are always present:
+			 * si_pid === si_band === si_tid === si_addr(LS half)
+			 */
+			put_user_ex(from->_sifields._pad[0],
+				    &to->_sifields._pad[0]);
+			switch (from->si_code >> 16) {
+			case __SI_FAULT >> 16:
+				break;
+			case __SI_CHLD >> 16:
+				put_user_ex(from->si_utime, &to->si_utime);
+				put_user_ex(from->si_stime, &to->si_stime);
+				put_user_ex(from->si_status, &to->si_status);
+				/* FALL THROUGH */
+			default:
+			case __SI_KILL >> 16:
+				put_user_ex(from->si_uid, &to->si_uid);
+				break;
+			case __SI_POLL >> 16:
+				put_user_ex(from->si_fd, &to->si_fd);
+				break;
+			case __SI_TIMER >> 16:
+				put_user_ex(from->si_overrun, &to->si_overrun);
+				put_user_ex(ptr_to_compat(from->si_ptr),
+					    &to->si_ptr);
+				break;
+				 /* This is not generated by the kernel as of now.  */
+			case __SI_RT >> 16:
+			case __SI_MESGQ >> 16:
+				put_user_ex(from->si_uid, &to->si_uid);
+				put_user_ex(from->si_int, &to->si_int);
+				break;
+			}
+		}
+	} put_user_catch(err);
+
 	return err;
 }
 
 int copy_siginfo_from_user32(siginfo_t *to, compat_siginfo_t __user *from)
 {
-	int err;
+	int err = 0;
 	u32 ptr32;
 
 	if (!access_ok(VERIFY_READ, from, sizeof(compat_siginfo_t)))
 		return -EFAULT;
 
-	err = __get_user(to->si_signo, &from->si_signo);
-	err |= __get_user(to->si_errno, &from->si_errno);
-	err |= __get_user(to->si_code, &from->si_code);
+	get_user_try {
+		get_user_ex(to->si_signo, &from->si_signo);
+		get_user_ex(to->si_errno, &from->si_errno);
 
-	err |= __get_user(to->si_pid, &from->si_pid);
-	err |= __get_user(to->si_uid, &from->si_uid);
-	err |= __get_user(ptr32, &from->si_ptr);
-	to->si_ptr = compat_ptr(ptr32);
+		get_user_ex(to->si_code, &from->si_code);
+		get_user_ex(to->si_pid, &from->si_pid);
+		get_user_ex(to->si_uid, &from->si_uid);
+		get_user_ex(ptr32, &from->si_ptr);
+		to->si_ptr = compat_ptr(ptr32);
+	} get_user_catch(err);
 
 	return err;
 }
@@ -142,17 +145,23 @@ asmlinkage long sys32_sigaltstack(const stack_ia32_t __user *uss_ptr,
 				  struct pt_regs *regs)
 {
 	stack_t uss, uoss;
-	int ret;
+	int ret, err = 0;
 	mm_segment_t seg;
 	if (uss_ptr) {
 		u32 ptr;
 
 		memset(&uss, 0, sizeof(stack_t));
-		if (!access_ok(VERIFY_READ, uss_ptr, sizeof(stack_ia32_t)) ||
-		    __get_user(ptr, &uss_ptr->ss_sp) ||
-		    __get_user(uss.ss_flags, &uss_ptr->ss_flags) ||
-		    __get_user(uss.ss_size, &uss_ptr->ss_size))
+		if (!access_ok(VERIFY_READ, uss_ptr, sizeof(stack_ia32_t)))
+			return -EFAULT;
+
+		get_user_try {
+			get_user_ex(ptr, &uss_ptr->ss_sp);
+			get_user_ex(uss.ss_flags, &uss_ptr->ss_flags);
+			get_user_ex(uss.ss_size, &uss_ptr->ss_size);
+		} get_user_catch(err);
+
+		if (err)
 			return -EFAULT;
 		uss.ss_sp = compat_ptr(ptr);
 	}
@@ -161,10 +170,16 @@ asmlinkage long sys32_sigaltstack(const stack_ia32_t __user *uss_ptr,
 	ret = do_sigaltstack(uss_ptr ? &uss : NULL, &uoss, regs->sp);
 	set_fs(seg);
 	if (ret >= 0 && uoss_ptr)  {
-		if (!access_ok(VERIFY_WRITE, uoss_ptr, sizeof(stack_ia32_t)) ||
-		    __put_user(ptr_to_compat(uoss.ss_sp), &uoss_ptr->ss_sp) ||
-		    __put_user(uoss.ss_flags, &uoss_ptr->ss_flags) ||
-		    __put_user(uoss.ss_size, &uoss_ptr->ss_size))
+		if (!access_ok(VERIFY_WRITE, uoss_ptr, sizeof(stack_ia32_t)))
+			return -EFAULT;
+
+		put_user_try {
+			put_user_ex(ptr_to_compat(uoss.ss_sp), &uoss_ptr->ss_sp);
+			put_user_ex(uoss.ss_flags, &uoss_ptr->ss_flags);
+			put_user_ex(uoss.ss_size, &uoss_ptr->ss_size);
+		} put_user_catch(err);
+
+		if (err)
 			ret = -EFAULT;
 	}
 	return ret;
@@ -173,75 +188,78 @@ asmlinkage long sys32_sigaltstack(const stack_ia32_t __user *uss_ptr,
 /*
  * Do a signal return; undo the signal stack.
  */
+#define loadsegment_gs(v)	load_gs_index(v)
+#define loadsegment_fs(v)	loadsegment(fs, v)
+#define loadsegment_ds(v)	loadsegment(ds, v)
+#define loadsegment_es(v)	loadsegment(es, v)
+
+#define get_user_seg(seg)	({ unsigned int v; savesegment(seg, v); v; })
+#define set_user_seg(seg, v)	loadsegment_##seg(v)
+
 #define COPY(x)			{		\
-	err |= __get_user(regs->x, &sc->x);	\
+	get_user_ex(regs->x, &sc->x);		\
 }
 
-#define COPY_SEG_CPL3(seg)	{		\
-		unsigned short tmp;		\
-		err |= __get_user(tmp, &sc->seg); \
-		regs->seg = tmp | 3;		\
-}
+#define GET_SEG(seg)		({		\
+	unsigned short tmp;			\
+	get_user_ex(tmp, &sc->seg);		\
+	tmp;					\
+})
+
+#define COPY_SEG_CPL3(seg)	do {		\
+	regs->seg = GET_SEG(seg) | 3;		\
+} while (0)
 
 #define RELOAD_SEG(seg)		{		\
-	unsigned int cur, pre;			\
-	err |= __get_user(pre, &sc->seg);	\
-	savesegment(seg, cur);			\
+	unsigned int pre = GET_SEG(seg);	\
+	unsigned int cur = get_user_seg(seg);	\
 	pre |= 3;				\
 	if (pre != cur)				\
-		loadsegment(seg, pre);		\
+		set_user_seg(seg, pre);		\
 }
 
 static int ia32_restore_sigcontext(struct pt_regs *regs,
 				   struct sigcontext_ia32 __user *sc,
 				   unsigned int *pax)
 {
-	unsigned int tmpflags, gs, oldgs, err = 0;
+	unsigned int tmpflags, err = 0;
 	void __user *buf;
 	u32 tmp;
 
 	/* Always make any pending restarted system calls return -EINTR */
 	current_thread_info()->restart_block.fn = do_no_restart_syscall;
 
-#if DEBUG_SIG
-	printk(KERN_DEBUG "SIG restore_sigcontext: "
-	       "sc=%p err(%x) eip(%x) cs(%x) flg(%x)\n",
-	       sc, sc->err, sc->ip, sc->cs, sc->flags);
-#endif
-
-	/*
-	 * Reload fs and gs if they have changed in the signal
-	 * handler.  This does not handle long fs/gs base changes in
-	 * the handler, but does not clobber them at least in the
-	 * normal case.
-	 */
-	err |= __get_user(gs, &sc->gs);
-	gs |= 3;
-	savesegment(gs, oldgs);
-	if (gs != oldgs)
-		load_gs_index(gs);
-
-	RELOAD_SEG(fs);
-	RELOAD_SEG(ds);
-	RELOAD_SEG(es);
+	get_user_try {
+		/*
+		 * Reload fs and gs if they have changed in the signal
+		 * handler.  This does not handle long fs/gs base changes in
+		 * the handler, but does not clobber them at least in the
+		 * normal case.
+		 */
+		RELOAD_SEG(gs);
+		RELOAD_SEG(fs);
+		RELOAD_SEG(ds);
+		RELOAD_SEG(es);
 
-	COPY(di); COPY(si); COPY(bp); COPY(sp); COPY(bx);
-	COPY(dx); COPY(cx); COPY(ip);
-	/* Don't touch extended registers */
+		COPY(di); COPY(si); COPY(bp); COPY(sp); COPY(bx);
+		COPY(dx); COPY(cx); COPY(ip);
+		/* Don't touch extended registers */
 
-	COPY_SEG_CPL3(cs);
-	COPY_SEG_CPL3(ss);
+		COPY_SEG_CPL3(cs);
+		COPY_SEG_CPL3(ss);
 
-	err |= __get_user(tmpflags, &sc->flags);
-	regs->flags = (regs->flags & ~FIX_EFLAGS) | (tmpflags & FIX_EFLAGS);
-	/* disable syscall checks */
-	regs->orig_ax = -1;
+		get_user_ex(tmpflags, &sc->flags);
+		regs->flags = (regs->flags & ~FIX_EFLAGS) | (tmpflags & FIX_EFLAGS);
+		/* disable syscall checks */
+		regs->orig_ax = -1;
 
-	err |= __get_user(tmp, &sc->fpstate);
-	buf = compat_ptr(tmp);
-	err |= restore_i387_xstate_ia32(buf);
+		get_user_ex(tmp, &sc->fpstate);
+		buf = compat_ptr(tmp);
+		err |= restore_i387_xstate_ia32(buf);
 
-	err |= __get_user(*pax, &sc->ax);
+		get_user_ex(*pax, &sc->ax);
+	} get_user_catch(err);
+
 	return err;
 }
@@ -317,38 +335,36 @@ static int ia32_setup_sigcontext(struct sigcontext_ia32 __user *sc,
 				 void __user *fpstate,
 				 struct pt_regs *regs, unsigned int mask)
 {
-	int tmp, err = 0;
+	int err = 0;
 
-	savesegment(gs, tmp);
-	err |= __put_user(tmp, (unsigned int __user *)&sc->gs);
-	savesegment(fs, tmp);
-	err |= __put_user(tmp, (unsigned int __user *)&sc->fs);
-	savesegment(ds, tmp);
-	err |= __put_user(tmp, (unsigned int __user *)&sc->ds);
-	savesegment(es, tmp);
-	err |= __put_user(tmp, (unsigned int __user *)&sc->es);
+	put_user_try {
+		put_user_ex(get_user_seg(gs), (unsigned int __user *)&sc->gs);
+		put_user_ex(get_user_seg(fs), (unsigned int __user *)&sc->fs);
+		put_user_ex(get_user_seg(ds), (unsigned int __user *)&sc->ds);
+		put_user_ex(get_user_seg(es), (unsigned int __user *)&sc->es);
 
-	err |= __put_user(regs->di, &sc->di);
-	err |= __put_user(regs->si, &sc->si);
-	err |= __put_user(regs->bp, &sc->bp);
-	err |= __put_user(regs->sp, &sc->sp);
-	err |= __put_user(regs->bx, &sc->bx);
-	err |= __put_user(regs->dx, &sc->dx);
-	err |= __put_user(regs->cx, &sc->cx);
-	err |= __put_user(regs->ax, &sc->ax);
-	err |= __put_user(current->thread.trap_no, &sc->trapno);
-	err |= __put_user(current->thread.error_code, &sc->err);
-	err |= __put_user(regs->ip, &sc->ip);
-	err |= __put_user(regs->cs, (unsigned int __user *)&sc->cs);
-	err |= __put_user(regs->flags, &sc->flags);
-	err |= __put_user(regs->sp, &sc->sp_at_signal);
-	err |= __put_user(regs->ss, (unsigned int __user *)&sc->ss);
+		put_user_ex(regs->di, &sc->di);
+		put_user_ex(regs->si, &sc->si);
+		put_user_ex(regs->bp, &sc->bp);
+		put_user_ex(regs->sp, &sc->sp);
+		put_user_ex(regs->bx, &sc->bx);
+		put_user_ex(regs->dx, &sc->dx);
+		put_user_ex(regs->cx, &sc->cx);
+		put_user_ex(regs->ax, &sc->ax);
+		put_user_ex(current->thread.trap_no, &sc->trapno);
+		put_user_ex(current->thread.error_code, &sc->err);
+		put_user_ex(regs->ip, &sc->ip);
+		put_user_ex(regs->cs, (unsigned int __user *)&sc->cs);
+		put_user_ex(regs->flags, &sc->flags);
+		put_user_ex(regs->sp, &sc->sp_at_signal);
+		put_user_ex(regs->ss, (unsigned int __user *)&sc->ss);
 
-	err |= __put_user(ptr_to_compat(fpstate), &sc->fpstate);
+		put_user_ex(ptr_to_compat(fpstate), &sc->fpstate);
 
-	/* non-iBCS2 extensions.. */
-	err |= __put_user(mask, &sc->oldmask);
-	err |= __put_user(current->thread.cr2, &sc->cr2);
+		/* non-iBCS2 extensions.. */
+		put_user_ex(mask, &sc->oldmask);
+		put_user_ex(current->thread.cr2, &sc->cr2);
+	} put_user_catch(err);
 
 	return err;
 }
@@ -437,13 +453,17 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
 		else
 			restorer = &frame->retcode;
 	}
-	err |= __put_user(ptr_to_compat(restorer), &frame->pretcode);
 
-	/*
-	 * These are actually not used anymore, but left because some
-	 * gdb versions depend on them as a marker.
-	 */
-	err |= __put_user(*((u64 *)&code), (u64 *)frame->retcode);
+	put_user_try {
+		put_user_ex(ptr_to_compat(restorer), &frame->pretcode);
+
+		/*
+		 * These are actually not used anymore, but left because some
+		 * gdb versions depend on them as a marker.
+		 */
+		put_user_ex(*((u64 *)&code), (u64 *)frame->retcode);
+	} put_user_catch(err);
+
 	if (err)
 		return -EFAULT;
@@ -462,11 +482,6 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
 	regs->cs = __USER32_CS;
 	regs->ss = __USER32_DS;
 
-#if DEBUG_SIG
-	printk(KERN_DEBUG "SIG deliver (%s:%d): sp=%p pc=%lx ra=%u\n",
-	       current->comm, current->pid, frame, regs->ip, frame->pretcode);
-#endif
-
 	return 0;
 }
@@ -496,41 +511,40 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
 		return -EFAULT;
 
-	err |= __put_user(sig, &frame->sig);
-	err |= __put_user(ptr_to_compat(&frame->info), &frame->pinfo);
-	err |= __put_user(ptr_to_compat(&frame->uc), &frame->puc);
-	err |= copy_siginfo_to_user32(&frame->info, info);
-	if (err)
-		return -EFAULT;
+	put_user_try {
+		put_user_ex(sig, &frame->sig);
+		put_user_ex(ptr_to_compat(&frame->info), &frame->pinfo);
+		put_user_ex(ptr_to_compat(&frame->uc), &frame->puc);
+		err |= copy_siginfo_to_user32(&frame->info, info);
 
-	/* Create the ucontext.  */
-	if (cpu_has_xsave)
-		err |= __put_user(UC_FP_XSTATE, &frame->uc.uc_flags);
-	else
-		err |= __put_user(0, &frame->uc.uc_flags);
-	err |= __put_user(0, &frame->uc.uc_link);
-	err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
-	err |= __put_user(sas_ss_flags(regs->sp),
-			  &frame->uc.uc_stack.ss_flags);
-	err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
-	err |= ia32_setup_sigcontext(&frame->uc.uc_mcontext, fpstate,
-				     regs, set->sig[0]);
-	err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
-	if (err)
-		return -EFAULT;
+		/* Create the ucontext.  */
+		if (cpu_has_xsave)
+			put_user_ex(UC_FP_XSTATE, &frame->uc.uc_flags);
+		else
+			put_user_ex(0, &frame->uc.uc_flags);
+		put_user_ex(0, &frame->uc.uc_link);
+		put_user_ex(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
+		put_user_ex(sas_ss_flags(regs->sp),
+			    &frame->uc.uc_stack.ss_flags);
+		put_user_ex(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
+		err |= ia32_setup_sigcontext(&frame->uc.uc_mcontext, fpstate,
+					     regs, set->sig[0]);
+		err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
 
-	if (ka->sa.sa_flags & SA_RESTORER)
-		restorer = ka->sa.sa_restorer;
-	else
-		restorer = VDSO32_SYMBOL(current->mm->context.vdso,
-					 rt_sigreturn);
-	err |= __put_user(ptr_to_compat(restorer), &frame->pretcode);
+		if (ka->sa.sa_flags & SA_RESTORER)
+			restorer = ka->sa.sa_restorer;
+		else
+			restorer = VDSO32_SYMBOL(current->mm->context.vdso,
+						 rt_sigreturn);
+		put_user_ex(ptr_to_compat(restorer), &frame->pretcode);
 
-	/*
-	 * Not actually used anymore, but left because some gdb
-	 * versions need it.
-	 */
-	err |= __put_user(*((u64 *)&code), (u64 *)frame->retcode);
+		/*
+		 * Not actually used anymore, but left because some gdb
+		 * versions need it.
+		 */
+		put_user_ex(*((u64 *)&code), (u64 *)frame->retcode);
+	} put_user_catch(err);
+
 	if (err)
 		return -EFAULT;
@@ -549,10 +563,5 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 	regs->cs = __USER32_CS;
 	regs->ss = __USER32_DS;
 
-#if DEBUG_SIG
-	printk(KERN_DEBUG "SIG deliver (%s:%d): sp=%p pc=%lx ra=%u\n",
-	       current->comm, current->pid, frame, regs->ip, frame->pretcode);
-#endif
-
 	return 0;
 }
(This diff has been collapsed.)
@@ -55,7 +55,7 @@ static inline void aout_dump_thread(struct pt_regs *regs, struct user *dump)
 	dump->regs.ds = (u16)regs->ds;
 	dump->regs.es = (u16)regs->es;
 	dump->regs.fs = (u16)regs->fs;
-	savesegment(gs, dump->regs.gs);
+	dump->regs.gs = get_user_gs(regs);
 	dump->regs.orig_ax = regs->orig_ax;
 	dump->regs.ip = regs->ip;
 	dump->regs.cs = (u16)regs->cs;
...
@@ -102,9 +102,6 @@ static inline void disable_acpi(void)
 	acpi_noirq = 1;
 }
 
-/* Fixmap pages to reserve for ACPI boot-time tables (see fixmap.h) */
-#define FIX_ACPI_PAGES		4
-
 extern int acpi_gsi_to_irq(u32 gsi, unsigned int *irq);
 static inline void acpi_noirq_set(void) { acpi_noirq = 1; }
...
(394 more collapsed diffs omitted.)