Commit 2f3e4af4 authored by Randy Dunlap

Merge branch 'docs-security' into docs-move

...@@ -192,10 +192,6 @@ kernel-docs.txt ...@@ -192,10 +192,6 @@ kernel-docs.txt
- listing of various WWW + books that document kernel internals. - listing of various WWW + books that document kernel internals.
kernel-parameters.txt kernel-parameters.txt
- summary listing of command line / boot prompt args for the kernel. - summary listing of command line / boot prompt args for the kernel.
keys-request-key.txt
- description of the kernel key request service.
keys.txt
- description of the kernel key retention service.
kobject.txt kobject.txt
- info of the kobject infrastructure of the Linux kernel. - info of the kobject infrastructure of the Linux kernel.
kprobes.txt kprobes.txt
...@@ -294,6 +290,8 @@ scheduler/ ...@@ -294,6 +290,8 @@ scheduler/
- directory with info on the scheduler. - directory with info on the scheduler.
scsi/ scsi/
- directory with info on Linux scsi support. - directory with info on Linux scsi support.
security/
- directory that contains security-related info
serial/ serial/
- directory with info on the low level serial API. - directory with info on the low level serial API.
serial-console.txt serial-console.txt
......
...@@ -47,8 +47,8 @@ request-key will find the first matching line and corresponding program. In ...@@ -47,8 +47,8 @@ request-key will find the first matching line and corresponding program. In
this case, /some/other/program will handle all uid lookups and this case, /some/other/program will handle all uid lookups and
/usr/sbin/nfs.idmap will handle gid, user, and group lookups. /usr/sbin/nfs.idmap will handle gid, user, and group lookups.
See <file:Documentation/keys-request-keys.txt> for more information about the See <file:Documentation/security/keys-request-keys.txt> for more information
request-key function. about the request-key function.
========= =========
......
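The request-key upcall described in the hunk above can be exercised from userspace through the keyutils library. A minimal sketch, assuming libkeyutils is installed; the key type, description, and callout string are illustrative and not taken from the documents being moved:

/* If no matching key exists, the kernel runs /sbin/request-key, which
 * consults /etc/request-key.conf to pick a handler program, as the
 * Kconfig text above describes. */
#include <keyutils.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        key_serial_t key = request_key("user", "uid:1000", "debug:example",
                                       KEY_SPEC_SESSION_KEYRING);
        if (key == -1) {
                perror("request_key");
                return EXIT_FAILURE;
        }
        printf("obtained key serial %d\n", key);
        return 0;
}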
...@@ -139,8 +139,8 @@ the key will be discarded and recreated when the data it holds has expired. ...@@ -139,8 +139,8 @@ the key will be discarded and recreated when the data it holds has expired.
dns_query() returns a copy of the value attached to the key, or an error if dns_query() returns a copy of the value attached to the key, or an error if
that is indicated instead. that is indicated instead.
See <file:Documentation/keys-request-key.txt> for further information about See <file:Documentation/security/keys-request-key.txt> for further
request-key function. information about request-key function.
========= =========
......
00-INDEX
- this file.
SELinux.txt
- how to get started with the SELinux security enhancement.
Smack.txt
- documentation on the Smack Linux Security Module.
apparmor.txt
- documentation on the AppArmor security extension.
credentials.txt
- documentation about credentials in Linux.
keys-request-key.txt
- description of the kernel key request service.
keys-trusted-encrypted.txt
- info on the Trusted and Encrypted keys in the kernel key ring service.
keys.txt
- description of the kernel key retention service.
tomoyo.txt
- documentation on the TOMOYO Linux Security Module.
...@@ -216,7 +216,7 @@ The Linux kernel supports the following types of credentials: ...@@ -216,7 +216,7 @@ The Linux kernel supports the following types of credentials:
When a process accesses a key, if not already present, it will normally be When a process accesses a key, if not already present, it will normally be
cached on one of these keyrings for future accesses to find. cached on one of these keyrings for future accesses to find.
For more information on using keys, see Documentation/keys.txt. For more information on using keys, see Documentation/security/keys.txt.
(5) LSM (5) LSM
......
...@@ -3,8 +3,8 @@ ...@@ -3,8 +3,8 @@
=================== ===================
The key request service is part of the key retention service (refer to The key request service is part of the key retention service (refer to
Documentation/keys.txt). This document explains more fully how the requesting Documentation/security/keys.txt). This document explains more fully how
algorithm works. the requesting algorithm works.
The process starts by either the kernel requesting a service by calling The process starts by either the kernel requesting a service by calling
request_key*(): request_key*():
......
...@@ -434,7 +434,7 @@ The main syscalls are: ...@@ -434,7 +434,7 @@ The main syscalls are:
/sbin/request-key will be invoked in an attempt to obtain a key. The /sbin/request-key will be invoked in an attempt to obtain a key. The
callout_info string will be passed as an argument to the program. callout_info string will be passed as an argument to the program.
See also Documentation/keys-request-key.txt. See also Documentation/security/keys-request-key.txt.
The keyctl syscall functions are: The keyctl syscall functions are:
...@@ -864,7 +864,7 @@ payload contents" for more information. ...@@ -864,7 +864,7 @@ payload contents" for more information.
If successful, the key will have been attached to the default keyring for If successful, the key will have been attached to the default keyring for
implicitly obtained request-key keys, as set by KEYCTL_SET_REQKEY_KEYRING. implicitly obtained request-key keys, as set by KEYCTL_SET_REQKEY_KEYRING.
See also Documentation/keys-request-key.txt. See also Documentation/security/keys-request-key.txt.
(*) To search for a key, passing auxiliary data to the upcaller, call: (*) To search for a key, passing auxiliary data to the upcaller, call:
......
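For reference, the keyctl-level interface that keys.txt documents can be driven through the keyutils wrappers. A hedged sketch, assuming libkeyutils; the description string and payload are made up:

#include <keyutils.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        const char *payload = "secret-data";

        /* add_key() creates a key and links it into the chosen keyring. */
        key_serial_t key = add_key("user", "example:token", payload,
                                   strlen(payload), KEY_SPEC_SESSION_KEYRING);
        if (key == -1) {
                perror("add_key");
                return EXIT_FAILURE;
        }

        /* keyctl_read_alloc() reads the payload back into a malloc'd buffer. */
        void *buf;
        long len = keyctl_read_alloc(key, &buf);
        if (len < 0) {
                perror("keyctl_read_alloc");
                return EXIT_FAILURE;
        }
        printf("read back %ld bytes\n", len);
        free(buf);
        return 0;
}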
...@@ -2813,38 +2813,19 @@ F: Documentation/gpio.txt ...@@ -2813,38 +2813,19 @@ F: Documentation/gpio.txt
F: drivers/gpio/ F: drivers/gpio/
F: include/linux/gpio* F: include/linux/gpio*
GRE DEMULTIPLEXER DRIVER
M: Dmitry Kozlov <xeb@mail.ru>
L: netdev@vger.kernel.org
S: Maintained
F: net/ipv4/gre.c
F: include/net/gre.h
GRETH 10/100/1G Ethernet MAC device driver GRETH 10/100/1G Ethernet MAC device driver
M: Kristoffer Glembo <kristoffer@gaisler.com> M: Kristoffer Glembo <kristoffer@gaisler.com>
L: netdev@vger.kernel.org L: netdev@vger.kernel.org
S: Maintained S: Maintained
F: drivers/net/greth* F: drivers/net/greth*
HARD DRIVE ACTIVE PROTECTION SYSTEM (HDAPS) DRIVER
M: Frank Seidel <frank@f-seidel.de>
L: platform-driver-x86@vger.kernel.org
W: http://www.kernel.org/pub/linux/kernel/people/fseidel/hdaps/
S: Maintained
F: drivers/platform/x86/hdaps.c
HWPOISON MEMORY FAILURE HANDLING
M: Andi Kleen <andi@firstfloor.org>
L: linux-mm@kvack.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-mce-2.6.git hwpoison
S: Maintained
F: mm/memory-failure.c
F: mm/hwpoison-inject.c
HYPERVISOR VIRTUAL CONSOLE DRIVER
L: linuxppc-dev@lists.ozlabs.org
S: Odd Fixes
F: drivers/tty/hvc/
iSCSI BOOT FIRMWARE TABLE (iBFT) DRIVER
M: Peter Jones <pjones@redhat.com>
M: Konrad Rzeszutek Wilk <konrad@kernel.org>
S: Maintained
F: drivers/firmware/iscsi_ibft*
GSPCA FINEPIX SUBDRIVER GSPCA FINEPIX SUBDRIVER
M: Frank Zago <frank@zago.net> M: Frank Zago <frank@zago.net>
L: linux-media@vger.kernel.org L: linux-media@vger.kernel.org
...@@ -2895,6 +2876,26 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6.git ...@@ -2895,6 +2876,26 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6.git
S: Maintained S: Maintained
F: drivers/media/video/gspca/ F: drivers/media/video/gspca/
HARD DRIVE ACTIVE PROTECTION SYSTEM (HDAPS) DRIVER
M: Frank Seidel <frank@f-seidel.de>
L: platform-driver-x86@vger.kernel.org
W: http://www.kernel.org/pub/linux/kernel/people/fseidel/hdaps/
S: Maintained
F: drivers/platform/x86/hdaps.c
HWPOISON MEMORY FAILURE HANDLING
M: Andi Kleen <andi@firstfloor.org>
L: linux-mm@kvack.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-mce-2.6.git hwpoison
S: Maintained
F: mm/memory-failure.c
F: mm/hwpoison-inject.c
HYPERVISOR VIRTUAL CONSOLE DRIVER
L: linuxppc-dev@lists.ozlabs.org
S: Odd Fixes
F: drivers/tty/hvc/
HARDWARE MONITORING HARDWARE MONITORING
M: Jean Delvare <khali@linux-fr.org> M: Jean Delvare <khali@linux-fr.org>
M: Guenter Roeck <guenter.roeck@ericsson.com> M: Guenter Roeck <guenter.roeck@ericsson.com>
...@@ -3478,6 +3479,12 @@ F: Documentation/isapnp.txt ...@@ -3478,6 +3479,12 @@ F: Documentation/isapnp.txt
F: drivers/pnp/isapnp/ F: drivers/pnp/isapnp/
F: include/linux/isapnp.h F: include/linux/isapnp.h
iSCSI BOOT FIRMWARE TABLE (iBFT) DRIVER
M: Peter Jones <pjones@redhat.com>
M: Konrad Rzeszutek Wilk <konrad@kernel.org>
S: Maintained
F: drivers/firmware/iscsi_ibft*
ISCSI ISCSI
M: Mike Christie <michaelc@cs.wisc.edu> M: Mike Christie <michaelc@cs.wisc.edu>
L: open-iscsi@googlegroups.com L: open-iscsi@googlegroups.com
...@@ -3698,7 +3705,7 @@ KEYS/KEYRINGS: ...@@ -3698,7 +3705,7 @@ KEYS/KEYRINGS:
M: David Howells <dhowells@redhat.com> M: David Howells <dhowells@redhat.com>
L: keyrings@linux-nfs.org L: keyrings@linux-nfs.org
S: Maintained S: Maintained
F: Documentation/keys.txt F: Documentation/security/keys.txt
F: include/linux/key.h F: include/linux/key.h
F: include/linux/key-type.h F: include/linux/key-type.h
F: include/keys/ F: include/keys/
...@@ -3710,7 +3717,7 @@ M: Mimi Zohar <zohar@us.ibm.com> ...@@ -3710,7 +3717,7 @@ M: Mimi Zohar <zohar@us.ibm.com>
L: linux-security-module@vger.kernel.org L: linux-security-module@vger.kernel.org
L: keyrings@linux-nfs.org L: keyrings@linux-nfs.org
S: Supported S: Supported
F: Documentation/keys-trusted-encrypted.txt F: Documentation/security/keys-trusted-encrypted.txt
F: include/keys/trusted-type.h F: include/keys/trusted-type.h
F: security/keys/trusted.c F: security/keys/trusted.c
F: security/keys/trusted.h F: security/keys/trusted.h
...@@ -3721,7 +3728,7 @@ M: David Safford <safford@watson.ibm.com> ...@@ -3721,7 +3728,7 @@ M: David Safford <safford@watson.ibm.com>
L: linux-security-module@vger.kernel.org L: linux-security-module@vger.kernel.org
L: keyrings@linux-nfs.org L: keyrings@linux-nfs.org
S: Supported S: Supported
F: Documentation/keys-trusted-encrypted.txt F: Documentation/security/keys-trusted-encrypted.txt
F: include/keys/encrypted-type.h F: include/keys/encrypted-type.h
F: security/keys/encrypted.c F: security/keys/encrypted.c
F: security/keys/encrypted.h F: security/keys/encrypted.h
...@@ -4989,6 +4996,13 @@ F: Documentation/pps/ ...@@ -4989,6 +4996,13 @@ F: Documentation/pps/
F: drivers/pps/ F: drivers/pps/
F: include/linux/pps*.h F: include/linux/pps*.h
PPTP DRIVER
M: Dmitry Kozlov <xeb@mail.ru>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/pptp.c
W: http://sourceforge.net/projects/accel-pptp
PREEMPTIBLE KERNEL PREEMPTIBLE KERNEL
M: Robert Love <rml@tech9.net> M: Robert Love <rml@tech9.net>
L: kpreempt-tech@lists.sourceforge.net L: kpreempt-tech@lists.sourceforge.net
...@@ -7024,20 +7038,6 @@ M: "Maciej W. Rozycki" <macro@linux-mips.org> ...@@ -7024,20 +7038,6 @@ M: "Maciej W. Rozycki" <macro@linux-mips.org>
S: Maintained S: Maintained
F: drivers/tty/serial/zs.* F: drivers/tty/serial/zs.*
GRE DEMULTIPLEXER DRIVER
M: Dmitry Kozlov <xeb@mail.ru>
L: netdev@vger.kernel.org
S: Maintained
F: net/ipv4/gre.c
F: include/net/gre.h
PPTP DRIVER
M: Dmitry Kozlov <xeb@mail.ru>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/pptp.c
W: http://sourceforge.net/projects/accel-pptp
THE REST THE REST
M: Linus Torvalds <torvalds@linux-foundation.org> M: Linus Torvalds <torvalds@linux-foundation.org>
L: linux-kernel@vger.kernel.org L: linux-kernel@vger.kernel.org
......
VERSION = 2 VERSION = 2
PATCHLEVEL = 6 PATCHLEVEL = 6
SUBLEVEL = 39 SUBLEVEL = 39
EXTRAVERSION = -rc6 EXTRAVERSION =
NAME = Flesh-Eating Bats with Fangs NAME = Flesh-Eating Bats with Fangs
# *DOCUMENTATION* # *DOCUMENTATION*
......
...@@ -452,10 +452,14 @@ ...@@ -452,10 +452,14 @@
#define __NR_fanotify_init 494 #define __NR_fanotify_init 494
#define __NR_fanotify_mark 495 #define __NR_fanotify_mark 495
#define __NR_prlimit64 496 #define __NR_prlimit64 496
#define __NR_name_to_handle_at 497
#define __NR_open_by_handle_at 498
#define __NR_clock_adjtime 499
#define __NR_syncfs 500
#ifdef __KERNEL__ #ifdef __KERNEL__
#define NR_SYSCALLS 497 #define NR_SYSCALLS 501
#define __ARCH_WANT_IPC_PARSE_VERSION #define __ARCH_WANT_IPC_PARSE_VERSION
#define __ARCH_WANT_OLD_READDIR #define __ARCH_WANT_OLD_READDIR
......
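The hunk above wires four new syscall numbers into the Alpha unistd.h and bumps NR_SYSCALLS to match. A minimal userspace sketch of invoking one of them (syncfs) by number, assuming headers that define __NR_syncfs; the path is just an example:

#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/tmp", O_RDONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* __NR_syncfs (500 on Alpha after this patch) comes from unistd.h;
         * going through syscall(2) avoids depending on a libc wrapper. */
        if (syscall(__NR_syncfs, fd) < 0)
                perror("syncfs");
        close(fd);
        return 0;
}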
...@@ -498,23 +498,27 @@ sys_call_table: ...@@ -498,23 +498,27 @@ sys_call_table:
.quad sys_ni_syscall /* sys_timerfd */ .quad sys_ni_syscall /* sys_timerfd */
.quad sys_eventfd .quad sys_eventfd
.quad sys_recvmmsg .quad sys_recvmmsg
.quad sys_fallocate /* 480 */ .quad sys_fallocate /* 480 */
.quad sys_timerfd_create .quad sys_timerfd_create
.quad sys_timerfd_settime .quad sys_timerfd_settime
.quad sys_timerfd_gettime .quad sys_timerfd_gettime
.quad sys_signalfd4 .quad sys_signalfd4
.quad sys_eventfd2 /* 485 */ .quad sys_eventfd2 /* 485 */
.quad sys_epoll_create1 .quad sys_epoll_create1
.quad sys_dup3 .quad sys_dup3
.quad sys_pipe2 .quad sys_pipe2
.quad sys_inotify_init1 .quad sys_inotify_init1
.quad sys_preadv /* 490 */ .quad sys_preadv /* 490 */
.quad sys_pwritev .quad sys_pwritev
.quad sys_rt_tgsigqueueinfo .quad sys_rt_tgsigqueueinfo
.quad sys_perf_event_open .quad sys_perf_event_open
.quad sys_fanotify_init .quad sys_fanotify_init
.quad sys_fanotify_mark /* 495 */ .quad sys_fanotify_mark /* 495 */
.quad sys_prlimit64 .quad sys_prlimit64
.quad sys_name_to_handle_at
.quad sys_open_by_handle_at
.quad sys_clock_adjtime
.quad sys_syncfs /* 500 */
.size sys_call_table, . - sys_call_table .size sys_call_table, . - sys_call_table
.type sys_call_table, @object .type sys_call_table, @object
......
...@@ -375,8 +375,7 @@ static struct clocksource clocksource_rpcc = { ...@@ -375,8 +375,7 @@ static struct clocksource clocksource_rpcc = {
static inline void register_rpcc_clocksource(long cycle_freq) static inline void register_rpcc_clocksource(long cycle_freq)
{ {
clocksource_calc_mult_shift(&clocksource_rpcc, cycle_freq, 4); clocksource_register_hz(&clocksource_rpcc, cycle_freq);
clocksource_register(&clocksource_rpcc);
} }
#else /* !CONFIG_SMP */ #else /* !CONFIG_SMP */
static inline void register_rpcc_clocksource(long cycle_freq) static inline void register_rpcc_clocksource(long cycle_freq)
......
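The Alpha timer hunk above switches from computing mult/shift by hand to clocksource_register_hz(), which derives them internally from the cycle frequency. A sketch of the new registration pattern in a generic driver, assuming a hypothetical hardware counter read; only the final call mirrors the change:

#include <linux/clocksource.h>
#include <linux/init.h>

static cycle_t example_read(struct clocksource *cs)
{
        return (cycle_t)read_hw_counter();      /* hypothetical hardware read */
}

static struct clocksource example_cs = {
        .name   = "example",
        .rating = 300,
        .read   = example_read,
        .mask   = CLOCKSOURCE_MASK(32),
        .flags  = CLOCK_SOURCE_IS_CONTINUOUS,
};

static void __init example_clocksource_init(unsigned long cycle_freq)
{
        /* mult and shift are calculated inside clocksource_register_hz(). */
        clocksource_register_hz(&example_cs, cycle_freq);
}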
...@@ -74,7 +74,7 @@ ZTEXTADDR := $(CONFIG_ZBOOT_ROM_TEXT) ...@@ -74,7 +74,7 @@ ZTEXTADDR := $(CONFIG_ZBOOT_ROM_TEXT)
ZBSSADDR := $(CONFIG_ZBOOT_ROM_BSS) ZBSSADDR := $(CONFIG_ZBOOT_ROM_BSS)
else else
ZTEXTADDR := 0 ZTEXTADDR := 0
ZBSSADDR := ALIGN(4) ZBSSADDR := ALIGN(8)
endif endif
SEDFLAGS = s/TEXT_START/$(ZTEXTADDR)/;s/BSS_START/$(ZBSSADDR)/ SEDFLAGS = s/TEXT_START/$(ZTEXTADDR)/;s/BSS_START/$(ZBSSADDR)/
......
...@@ -179,15 +179,14 @@ not_angel: ...@@ -179,15 +179,14 @@ not_angel:
bl cache_on bl cache_on
restart: adr r0, LC0 restart: adr r0, LC0
ldmia r0, {r1, r2, r3, r5, r6, r9, r11, r12} ldmia r0, {r1, r2, r3, r6, r9, r11, r12}
ldr sp, [r0, #32] ldr sp, [r0, #28]
/* /*
* We might be running at a different address. We need * We might be running at a different address. We need
* to fix up various pointers. * to fix up various pointers.
*/ */
sub r0, r0, r1 @ calculate the delta offset sub r0, r0, r1 @ calculate the delta offset
add r5, r5, r0 @ _start
add r6, r6, r0 @ _edata add r6, r6, r0 @ _edata
#ifndef CONFIG_ZBOOT_ROM #ifndef CONFIG_ZBOOT_ROM
...@@ -206,31 +205,40 @@ restart: adr r0, LC0 ...@@ -206,31 +205,40 @@ restart: adr r0, LC0
/* /*
* Check to see if we will overwrite ourselves. * Check to see if we will overwrite ourselves.
* r4 = final kernel address * r4 = final kernel address
* r5 = start of this image
* r9 = size of decompressed image * r9 = size of decompressed image
* r10 = end of this image, including bss/stack/malloc space if non XIP * r10 = end of this image, including bss/stack/malloc space if non XIP
* We basically want: * We basically want:
* r4 >= r10 -> OK * r4 - 16k page directory >= r10 -> OK
* r4 + image length <= r5 -> OK * r4 + image length <= current position (pc) -> OK
*/ */
add r10, r10, #16384
cmp r4, r10 cmp r4, r10
bhs wont_overwrite bhs wont_overwrite
add r10, r4, r9 add r10, r4, r9
cmp r10, r5 ARM( cmp r10, pc )
THUMB( mov lr, pc )
THUMB( cmp r10, lr )
bls wont_overwrite bls wont_overwrite
/* /*
* Relocate ourselves past the end of the decompressed kernel. * Relocate ourselves past the end of the decompressed kernel.
* r5 = start of this image
* r6 = _edata * r6 = _edata
* r10 = end of the decompressed kernel * r10 = end of the decompressed kernel
* Because we always copy ahead, we need to do it from the end and go * Because we always copy ahead, we need to do it from the end and go
* backward in case the source and destination overlap. * backward in case the source and destination overlap.
*/ */
/* Round up to next 256-byte boundary. */ /*
add r10, r10, #256 * Bump to the next 256-byte boundary with the size of
* the relocation code added. This avoids overwriting
* ourself when the offset is small.
*/
add r10, r10, #((reloc_code_end - restart + 256) & ~255)
bic r10, r10, #255 bic r10, r10, #255
/* Get start of code we want to copy and align it down. */
adr r5, restart
bic r5, r5, #31
sub r9, r6, r5 @ size to copy sub r9, r6, r5 @ size to copy
add r9, r9, #31 @ rounded up to a multiple add r9, r9, #31 @ rounded up to a multiple
bic r9, r9, #31 @ ... of 32 bytes bic r9, r9, #31 @ ... of 32 bytes
...@@ -245,6 +253,11 @@ restart: adr r0, LC0 ...@@ -245,6 +253,11 @@ restart: adr r0, LC0
/* Preserve offset to relocated code. */ /* Preserve offset to relocated code. */
sub r6, r9, r6 sub r6, r9, r6
#ifndef CONFIG_ZBOOT_ROM
/* cache_clean_flush may use the stack, so relocate it */
add sp, sp, r6
#endif
bl cache_clean_flush bl cache_clean_flush
adr r0, BSYM(restart) adr r0, BSYM(restart)
...@@ -333,7 +346,6 @@ not_relocated: mov r0, #0 ...@@ -333,7 +346,6 @@ not_relocated: mov r0, #0
LC0: .word LC0 @ r1 LC0: .word LC0 @ r1
.word __bss_start @ r2 .word __bss_start @ r2
.word _end @ r3 .word _end @ r3
.word _start @ r5
.word _edata @ r6 .word _edata @ r6
.word _image_size @ r9 .word _image_size @ r9
.word _got_start @ r11 .word _got_start @ r11
...@@ -1062,6 +1074,7 @@ memdump: mov r12, r0 ...@@ -1062,6 +1074,7 @@ memdump: mov r12, r0
#endif #endif
.ltorg .ltorg
reloc_code_end:
.align .align
.section ".stack", "aw", %nobits .section ".stack", "aw", %nobits
......
...@@ -54,6 +54,7 @@ SECTIONS ...@@ -54,6 +54,7 @@ SECTIONS
.bss : { *(.bss) } .bss : { *(.bss) }
_end = .; _end = .;
. = ALIGN(8); /* the stack must be 64-bit aligned */
.stack : { *(.stack) } .stack : { *(.stack) }
.stab 0 : { *(.stab) } .stab 0 : { *(.stab) }
......
...@@ -159,7 +159,7 @@ extern unsigned int user_debug; ...@@ -159,7 +159,7 @@ extern unsigned int user_debug;
#include <mach/barriers.h> #include <mach/barriers.h>
#elif defined(CONFIG_ARM_DMA_MEM_BUFFERABLE) || defined(CONFIG_SMP) #elif defined(CONFIG_ARM_DMA_MEM_BUFFERABLE) || defined(CONFIG_SMP)
#define mb() do { dsb(); outer_sync(); } while (0) #define mb() do { dsb(); outer_sync(); } while (0)
#define rmb() dmb() #define rmb() dsb()
#define wmb() mb() #define wmb() mb()
#else #else
#include <asm/memory.h> #include <asm/memory.h>
......
...@@ -767,12 +767,20 @@ long arch_ptrace(struct task_struct *child, long request, ...@@ -767,12 +767,20 @@ long arch_ptrace(struct task_struct *child, long request,
#ifdef CONFIG_HAVE_HW_BREAKPOINT #ifdef CONFIG_HAVE_HW_BREAKPOINT
case PTRACE_GETHBPREGS: case PTRACE_GETHBPREGS:
if (ptrace_get_breakpoints(child) < 0)
return -ESRCH;
ret = ptrace_gethbpregs(child, addr, ret = ptrace_gethbpregs(child, addr,
(unsigned long __user *)data); (unsigned long __user *)data);
ptrace_put_breakpoints(child);
break; break;
case PTRACE_SETHBPREGS: case PTRACE_SETHBPREGS:
if (ptrace_get_breakpoints(child) < 0)
return -ESRCH;
ret = ptrace_sethbpregs(child, addr, ret = ptrace_sethbpregs(child, addr,
(unsigned long __user *)data); (unsigned long __user *)data);
ptrace_put_breakpoints(child);
break; break;
#endif #endif
......
...@@ -597,45 +597,19 @@ setup_rt_frame(int usig, struct k_sigaction *ka, siginfo_t *info, ...@@ -597,45 +597,19 @@ setup_rt_frame(int usig, struct k_sigaction *ka, siginfo_t *info,
return err; return err;
} }
static inline void setup_syscall_restart(struct pt_regs *regs)
{
regs->ARM_r0 = regs->ARM_ORIG_r0;
regs->ARM_pc -= thumb_mode(regs) ? 2 : 4;
}
/* /*
* OK, we're invoking a handler * OK, we're invoking a handler
*/ */
static int static int
handle_signal(unsigned long sig, struct k_sigaction *ka, handle_signal(unsigned long sig, struct k_sigaction *ka,
siginfo_t *info, sigset_t *oldset, siginfo_t *info, sigset_t *oldset,
struct pt_regs * regs, int syscall) struct pt_regs * regs)
{ {
struct thread_info *thread = current_thread_info(); struct thread_info *thread = current_thread_info();
struct task_struct *tsk = current; struct task_struct *tsk = current;
int usig = sig; int usig = sig;
int ret; int ret;
/*
* If we were from a system call, check for system call restarting...
*/
if (syscall) {
switch (regs->ARM_r0) {
case -ERESTART_RESTARTBLOCK:
case -ERESTARTNOHAND:
regs->ARM_r0 = -EINTR;
break;
case -ERESTARTSYS:
if (!(ka->sa.sa_flags & SA_RESTART)) {
regs->ARM_r0 = -EINTR;
break;
}
/* fallthrough */
case -ERESTARTNOINTR:
setup_syscall_restart(regs);
}
}
/* /*
* translate the signal * translate the signal
*/ */
...@@ -685,6 +659,7 @@ handle_signal(unsigned long sig, struct k_sigaction *ka, ...@@ -685,6 +659,7 @@ handle_signal(unsigned long sig, struct k_sigaction *ka,
*/ */
static void do_signal(struct pt_regs *regs, int syscall) static void do_signal(struct pt_regs *regs, int syscall)
{ {
unsigned int retval = 0, continue_addr = 0, restart_addr = 0;
struct k_sigaction ka; struct k_sigaction ka;
siginfo_t info; siginfo_t info;
int signr; int signr;
...@@ -698,18 +673,61 @@ static void do_signal(struct pt_regs *regs, int syscall) ...@@ -698,18 +673,61 @@ static void do_signal(struct pt_regs *regs, int syscall)
if (!user_mode(regs)) if (!user_mode(regs))
return; return;
/*
* If we were from a system call, check for system call restarting...
*/
if (syscall) {
continue_addr = regs->ARM_pc;
restart_addr = continue_addr - (thumb_mode(regs) ? 2 : 4);
retval = regs->ARM_r0;
/*
* Prepare for system call restart. We do this here so that a
* debugger will see the already changed PSW.
*/
switch (retval) {
case -ERESTARTNOHAND:
case -ERESTARTSYS:
case -ERESTARTNOINTR:
regs->ARM_r0 = regs->ARM_ORIG_r0;
regs->ARM_pc = restart_addr;
break;
case -ERESTART_RESTARTBLOCK:
regs->ARM_r0 = -EINTR;
break;
}
}
if (try_to_freeze()) if (try_to_freeze())
goto no_signal; goto no_signal;
/*
* Get the signal to deliver. When running under ptrace, at this
* point the debugger may change all our registers ...
*/
signr = get_signal_to_deliver(&info, &ka, regs, NULL); signr = get_signal_to_deliver(&info, &ka, regs, NULL);
if (signr > 0) { if (signr > 0) {
sigset_t *oldset; sigset_t *oldset;
/*
* Depending on the signal settings we may need to revert the
* decision to restart the system call. But skip this if a
* debugger has chosen to restart at a different PC.
*/
if (regs->ARM_pc == restart_addr) {
if (retval == -ERESTARTNOHAND
|| (retval == -ERESTARTSYS
&& !(ka.sa.sa_flags & SA_RESTART))) {
regs->ARM_r0 = -EINTR;
regs->ARM_pc = continue_addr;
}
}
if (test_thread_flag(TIF_RESTORE_SIGMASK)) if (test_thread_flag(TIF_RESTORE_SIGMASK))
oldset = &current->saved_sigmask; oldset = &current->saved_sigmask;
else else
oldset = &current->blocked; oldset = &current->blocked;
if (handle_signal(signr, &ka, &info, oldset, regs, syscall) == 0) { if (handle_signal(signr, &ka, &info, oldset, regs) == 0) {
/* /*
* A signal was successfully delivered; the saved * A signal was successfully delivered; the saved
* sigmask will have been stored in the signal frame, * sigmask will have been stored in the signal frame,
...@@ -723,11 +741,14 @@ static void do_signal(struct pt_regs *regs, int syscall) ...@@ -723,11 +741,14 @@ static void do_signal(struct pt_regs *regs, int syscall)
} }
no_signal: no_signal:
/*
* No signal to deliver to the process - restart the syscall.
*/
if (syscall) { if (syscall) {
if (regs->ARM_r0 == -ERESTART_RESTARTBLOCK) { /*
* Handle restarting a different system call. As above,
* if a debugger has chosen to restart at a different PC,
* ignore the restart.
*/
if (retval == -ERESTART_RESTARTBLOCK
&& regs->ARM_pc == continue_addr) {
if (thumb_mode(regs)) { if (thumb_mode(regs)) {
regs->ARM_r7 = __NR_restart_syscall - __NR_SYSCALL_BASE; regs->ARM_r7 = __NR_restart_syscall - __NR_SYSCALL_BASE;
regs->ARM_pc -= 2; regs->ARM_pc -= 2;
...@@ -750,11 +771,6 @@ static void do_signal(struct pt_regs *regs, int syscall) ...@@ -750,11 +771,6 @@ static void do_signal(struct pt_regs *regs, int syscall)
#endif #endif
} }
} }
if (regs->ARM_r0 == -ERESTARTNOHAND ||
regs->ARM_r0 == -ERESTARTSYS ||
regs->ARM_r0 == -ERESTARTNOINTR) {
setup_syscall_restart(regs);
}
/* If there's no signal to deliver, we just put the saved sigmask /* If there's no signal to deliver, we just put the saved sigmask
* back. * back.
......
...@@ -115,6 +115,7 @@ int omap3_core_dpll_m2_set_rate(struct clk *clk, unsigned long rate) ...@@ -115,6 +115,7 @@ int omap3_core_dpll_m2_set_rate(struct clk *clk, unsigned long rate)
sdrc_cs0->rfr_ctrl, sdrc_cs0->actim_ctrla, sdrc_cs0->rfr_ctrl, sdrc_cs0->actim_ctrla,
sdrc_cs0->actim_ctrlb, sdrc_cs0->mr, sdrc_cs0->actim_ctrlb, sdrc_cs0->mr,
0, 0, 0, 0); 0, 0, 0, 0);
clk->rate = rate;
return 0; return 0;
} }
......
...@@ -4,5 +4,5 @@ ...@@ -4,5 +4,5 @@
* operation to deadlock the system. * operation to deadlock the system.
*/ */
#define mb() dsb() #define mb() dsb()
#define rmb() dmb() #define rmb() dsb()
#define wmb() mb() #define wmb() mb()
...@@ -23,7 +23,7 @@ ...@@ -23,7 +23,7 @@
#include <asm/outercache.h> #include <asm/outercache.h>
#define rmb() dmb() #define rmb() dsb()
#define wmb() do { dsb(); outer_sync(); } while (0) #define wmb() do { dsb(); outer_sync(); } while (0)
#define mb() wmb() #define mb() wmb()
......
...@@ -392,7 +392,7 @@ free_memmap(unsigned long start_pfn, unsigned long end_pfn) ...@@ -392,7 +392,7 @@ free_memmap(unsigned long start_pfn, unsigned long end_pfn)
* Convert start_pfn/end_pfn to a struct page pointer. * Convert start_pfn/end_pfn to a struct page pointer.
*/ */
start_pg = pfn_to_page(start_pfn - 1) + 1; start_pg = pfn_to_page(start_pfn - 1) + 1;
end_pg = pfn_to_page(end_pfn); end_pg = pfn_to_page(end_pfn - 1) + 1;
/* /*
* Convert to physical addresses, and * Convert to physical addresses, and
...@@ -426,6 +426,14 @@ static void __init free_unused_memmap(struct meminfo *mi) ...@@ -426,6 +426,14 @@ static void __init free_unused_memmap(struct meminfo *mi)
bank_start = bank_pfn_start(bank); bank_start = bank_pfn_start(bank);
#ifdef CONFIG_SPARSEMEM
/*
* Take care not to free memmap entries that don't exist
* due to SPARSEMEM sections which aren't present.
*/
bank_start = min(bank_start,
ALIGN(prev_bank_end, PAGES_PER_SECTION));
#endif
/* /*
* If we had a previous bank, and there is a space * If we had a previous bank, and there is a space
* between the current bank and the previous, free it. * between the current bank and the previous, free it.
...@@ -440,6 +448,12 @@ static void __init free_unused_memmap(struct meminfo *mi) ...@@ -440,6 +448,12 @@ static void __init free_unused_memmap(struct meminfo *mi)
*/ */
prev_bank_end = ALIGN(bank_pfn_end(bank), MAX_ORDER_NR_PAGES); prev_bank_end = ALIGN(bank_pfn_end(bank), MAX_ORDER_NR_PAGES);
} }
#ifdef CONFIG_SPARSEMEM
if (!IS_ALIGNED(prev_bank_end, PAGES_PER_SECTION))
free_memmap(prev_bank_end,
ALIGN(prev_bank_end, PAGES_PER_SECTION));
#endif
} }
static void __init free_highpages(void) static void __init free_highpages(void)
......
...@@ -793,6 +793,8 @@ static irqreturn_t iommu_fault_handler(int irq, void *data) ...@@ -793,6 +793,8 @@ static irqreturn_t iommu_fault_handler(int irq, void *data)
clk_enable(obj->clk); clk_enable(obj->clk);
errs = iommu_report_fault(obj, &da); errs = iommu_report_fault(obj, &da);
clk_disable(obj->clk); clk_disable(obj->clk);
if (errs == 0)
return IRQ_HANDLED;
/* Fault callback or TLB/PTE Dynamic loading */ /* Fault callback or TLB/PTE Dynamic loading */
if (obj->isr && !obj->isr(obj, da, errs, obj->isr_priv)) if (obj->isr && !obj->isr(obj, da, errs, obj->isr_priv))
......
...@@ -997,9 +997,6 @@ config IRQ_GT641XX ...@@ -997,9 +997,6 @@ config IRQ_GT641XX
config IRQ_GIC config IRQ_GIC
bool bool
config IRQ_CPU_OCTEON
bool
config MIPS_BOARDS_GEN config MIPS_BOARDS_GEN
bool bool
...@@ -1359,8 +1356,6 @@ config CPU_SB1 ...@@ -1359,8 +1356,6 @@ config CPU_SB1
config CPU_CAVIUM_OCTEON config CPU_CAVIUM_OCTEON
bool "Cavium Octeon processor" bool "Cavium Octeon processor"
depends on SYS_HAS_CPU_CAVIUM_OCTEON depends on SYS_HAS_CPU_CAVIUM_OCTEON
select IRQ_CPU
select IRQ_CPU_OCTEON
select CPU_HAS_PREFETCH select CPU_HAS_PREFETCH
select CPU_SUPPORTS_64BIT_KERNEL select CPU_SUPPORTS_64BIT_KERNEL
select SYS_SUPPORTS_SMP select SYS_SUPPORTS_SMP
......
...@@ -127,13 +127,10 @@ const char *get_system_type(void) ...@@ -127,13 +127,10 @@ const char *get_system_type(void)
void __init board_setup(void) void __init board_setup(void)
{ {
unsigned long bcsr1, bcsr2; unsigned long bcsr1, bcsr2;
u32 pin_func;
bcsr1 = DB1000_BCSR_PHYS_ADDR; bcsr1 = DB1000_BCSR_PHYS_ADDR;
bcsr2 = DB1000_BCSR_PHYS_ADDR + DB1000_BCSR_HEXLED_OFS; bcsr2 = DB1000_BCSR_PHYS_ADDR + DB1000_BCSR_HEXLED_OFS;
pin_func = 0;
#ifdef CONFIG_MIPS_DB1000 #ifdef CONFIG_MIPS_DB1000
printk(KERN_INFO "AMD Alchemy Au1000/Db1000 Board\n"); printk(KERN_INFO "AMD Alchemy Au1000/Db1000 Board\n");
#endif #endif
...@@ -164,12 +161,16 @@ void __init board_setup(void) ...@@ -164,12 +161,16 @@ void __init board_setup(void)
/* Not valid for Au1550 */ /* Not valid for Au1550 */
#if defined(CONFIG_IRDA) && \ #if defined(CONFIG_IRDA) && \
(defined(CONFIG_SOC_AU1000) || defined(CONFIG_SOC_AU1100)) (defined(CONFIG_SOC_AU1000) || defined(CONFIG_SOC_AU1100))
/* Set IRFIRSEL instead of GPIO15 */ {
pin_func = au_readl(SYS_PINFUNC) | SYS_PF_IRF; u32 pin_func;
au_writel(pin_func, SYS_PINFUNC);
/* Power off until the driver is in use */ /* Set IRFIRSEL instead of GPIO15 */
bcsr_mod(BCSR_RESETS, BCSR_RESETS_IRDA_MODE_MASK, pin_func = au_readl(SYS_PINFUNC) | SYS_PF_IRF;
BCSR_RESETS_IRDA_MODE_OFF); au_writel(pin_func, SYS_PINFUNC);
/* Power off until the driver is in use */
bcsr_mod(BCSR_RESETS, BCSR_RESETS_IRDA_MODE_MASK,
BCSR_RESETS_IRDA_MODE_OFF);
}
#endif #endif
bcsr_write(BCSR_PCMCIA, 0); /* turn off PCMCIA power */ bcsr_write(BCSR_PCMCIA, 0); /* turn off PCMCIA power */
...@@ -177,31 +178,35 @@ void __init board_setup(void) ...@@ -177,31 +178,35 @@ void __init board_setup(void)
alchemy_gpio1_input_enable(); alchemy_gpio1_input_enable();
#ifdef CONFIG_MIPS_MIRAGE #ifdef CONFIG_MIPS_MIRAGE
/* GPIO[20] is output */ {
alchemy_gpio_direction_output(20, 0); u32 pin_func;
/* Set GPIO[210:208] instead of SSI_0 */ /* GPIO[20] is output */
pin_func = au_readl(SYS_PINFUNC) | SYS_PF_S0; alchemy_gpio_direction_output(20, 0);
/* Set GPIO[215:211] for LEDs */ /* Set GPIO[210:208] instead of SSI_0 */
pin_func |= 5 << 2; pin_func = au_readl(SYS_PINFUNC) | SYS_PF_S0;
/* Set GPIO[214:213] for more LEDs */ /* Set GPIO[215:211] for LEDs */
pin_func |= 5 << 12; pin_func |= 5 << 2;
/* Set GPIO[207:200] instead of PCMCIA/LCD */ /* Set GPIO[214:213] for more LEDs */
pin_func |= SYS_PF_LCD | SYS_PF_PC; pin_func |= 5 << 12;
au_writel(pin_func, SYS_PINFUNC);
/* /* Set GPIO[207:200] instead of PCMCIA/LCD */
* Enable speaker amplifier. This should pin_func |= SYS_PF_LCD | SYS_PF_PC;
* be part of the audio driver. au_writel(pin_func, SYS_PINFUNC);
*/
alchemy_gpio_direction_output(209, 1);
pm_power_off = mirage_power_off; /*
_machine_halt = mirage_power_off; * Enable speaker amplifier. This should
_machine_restart = (void(*)(char *))mips_softreset; * be part of the audio driver.
*/
alchemy_gpio_direction_output(209, 1);
pm_power_off = mirage_power_off;
_machine_halt = mirage_power_off;
_machine_restart = (void(*)(char *))mips_softreset;
}
#endif #endif
#ifdef CONFIG_MIPS_BOSPORUS #ifdef CONFIG_MIPS_BOSPORUS
......
...@@ -51,10 +51,9 @@ void __init prom_init(void) ...@@ -51,10 +51,9 @@ void __init prom_init(void)
prom_init_cmdline(); prom_init_cmdline();
memsize_str = prom_getenv("memsize"); memsize_str = prom_getenv("memsize");
if (!memsize_str) if (!memsize_str || strict_strtoul(memsize_str, 0, &memsize))
memsize = 0x04000000; memsize = 0x04000000;
else
strict_strtoul(memsize_str, 0, &memsize);
add_memory_region(0, memsize, BOOT_MEM_RAM); add_memory_region(0, memsize, BOOT_MEM_RAM);
} }
......
...@@ -325,9 +325,7 @@ int __init ar7_gpio_init(void) ...@@ -325,9 +325,7 @@ int __init ar7_gpio_init(void)
size = 0x1f; size = 0x1f;
} }
gpch->regs = ioremap_nocache(AR7_REGS_GPIO, gpch->regs = ioremap_nocache(AR7_REGS_GPIO, size);
AR7_REGS_GPIO + 0x10);
if (!gpch->regs) { if (!gpch->regs) {
printk(KERN_ERR "%s: failed to ioremap regs\n", printk(KERN_ERR "%s: failed to ioremap regs\n",
gpch->chip.label); gpch->chip.label);
......
...@@ -16,8 +16,8 @@ ...@@ -16,8 +16,8 @@
int main(int argc, char *argv[]) int main(int argc, char *argv[])
{ {
unsigned long long vmlinux_size, vmlinux_load_addr, vmlinuz_load_addr;
struct stat sb; struct stat sb;
uint64_t vmlinux_size, vmlinux_load_addr, vmlinuz_load_addr;
if (argc != 3) { if (argc != 3) {
fprintf(stderr, "Usage: %s <pathname> <vmlinux_load_addr>\n", fprintf(stderr, "Usage: %s <pathname> <vmlinux_load_addr>\n",
......
config CAVIUM_OCTEON_SPECIFIC_OPTIONS if CPU_CAVIUM_OCTEON
bool "Enable Octeon specific options"
depends on CPU_CAVIUM_OCTEON
default "y"
config CAVIUM_CN63XXP1 config CAVIUM_CN63XXP1
bool "Enable CN63XXP1 errata worarounds" bool "Enable CN63XXP1 errata worarounds"
depends on CAVIUM_OCTEON_SPECIFIC_OPTIONS
default "n" default "n"
help help
The CN63XXP1 chip requires build time workarounds to The CN63XXP1 chip requires build time workarounds to
...@@ -16,7 +12,6 @@ config CAVIUM_CN63XXP1 ...@@ -16,7 +12,6 @@ config CAVIUM_CN63XXP1
config CAVIUM_OCTEON_2ND_KERNEL config CAVIUM_OCTEON_2ND_KERNEL
bool "Build the kernel to be used as a 2nd kernel on the same chip" bool "Build the kernel to be used as a 2nd kernel on the same chip"
depends on CAVIUM_OCTEON_SPECIFIC_OPTIONS
default "n" default "n"
help help
This option configures this kernel to be linked at a different This option configures this kernel to be linked at a different
...@@ -26,7 +21,6 @@ config CAVIUM_OCTEON_2ND_KERNEL ...@@ -26,7 +21,6 @@ config CAVIUM_OCTEON_2ND_KERNEL
config CAVIUM_OCTEON_HW_FIX_UNALIGNED config CAVIUM_OCTEON_HW_FIX_UNALIGNED
bool "Enable hardware fixups of unaligned loads and stores" bool "Enable hardware fixups of unaligned loads and stores"
depends on CAVIUM_OCTEON_SPECIFIC_OPTIONS
default "y" default "y"
help help
Configure the Octeon hardware to automatically fix unaligned loads Configure the Octeon hardware to automatically fix unaligned loads
...@@ -38,7 +32,6 @@ config CAVIUM_OCTEON_HW_FIX_UNALIGNED ...@@ -38,7 +32,6 @@ config CAVIUM_OCTEON_HW_FIX_UNALIGNED
config CAVIUM_OCTEON_CVMSEG_SIZE config CAVIUM_OCTEON_CVMSEG_SIZE
int "Number of L1 cache lines reserved for CVMSEG memory" int "Number of L1 cache lines reserved for CVMSEG memory"
depends on CAVIUM_OCTEON_SPECIFIC_OPTIONS
range 0 54 range 0 54
default 1 default 1
help help
...@@ -50,7 +43,6 @@ config CAVIUM_OCTEON_CVMSEG_SIZE ...@@ -50,7 +43,6 @@ config CAVIUM_OCTEON_CVMSEG_SIZE
config CAVIUM_OCTEON_LOCK_L2 config CAVIUM_OCTEON_LOCK_L2
bool "Lock often used kernel code in the L2" bool "Lock often used kernel code in the L2"
depends on CAVIUM_OCTEON_SPECIFIC_OPTIONS
default "y" default "y"
help help
Enable locking parts of the kernel into the L2 cache. Enable locking parts of the kernel into the L2 cache.
...@@ -93,7 +85,6 @@ config CAVIUM_OCTEON_LOCK_L2_MEMCPY ...@@ -93,7 +85,6 @@ config CAVIUM_OCTEON_LOCK_L2_MEMCPY
config ARCH_SPARSEMEM_ENABLE config ARCH_SPARSEMEM_ENABLE
def_bool y def_bool y
select SPARSEMEM_STATIC select SPARSEMEM_STATIC
depends on CPU_CAVIUM_OCTEON
config CAVIUM_OCTEON_HELPER config CAVIUM_OCTEON_HELPER
def_bool y def_bool y
...@@ -107,6 +98,8 @@ config NEED_SG_DMA_LENGTH ...@@ -107,6 +98,8 @@ config NEED_SG_DMA_LENGTH
config SWIOTLB config SWIOTLB
def_bool y def_bool y
depends on CPU_CAVIUM_OCTEON
select IOMMU_HELPER select IOMMU_HELPER
select NEED_SG_DMA_LENGTH select NEED_SG_DMA_LENGTH
endif # CPU_CAVIUM_OCTEON
...@@ -17,6 +17,6 @@ ...@@ -17,6 +17,6 @@
#define SMP_CACHE_SHIFT L1_CACHE_SHIFT #define SMP_CACHE_SHIFT L1_CACHE_SHIFT
#define SMP_CACHE_BYTES L1_CACHE_BYTES #define SMP_CACHE_BYTES L1_CACHE_BYTES
#define __read_mostly __attribute__((__section__(".data.read_mostly"))) #define __read_mostly __attribute__((__section__(".data..read_mostly")))
#endif /* _ASM_CACHE_H */ #endif /* _ASM_CACHE_H */
...@@ -14,6 +14,9 @@ ...@@ -14,6 +14,9 @@
#ifndef __ASM_CEVT_R4K_H #ifndef __ASM_CEVT_R4K_H
#define __ASM_CEVT_R4K_H #define __ASM_CEVT_R4K_H
#include <linux/clockchips.h>
#include <asm/time.h>
DECLARE_PER_CPU(struct clock_event_device, mips_clockevent_device); DECLARE_PER_CPU(struct clock_event_device, mips_clockevent_device);
void mips_event_handler(struct clock_event_device *dev); void mips_event_handler(struct clock_event_device *dev);
......
...@@ -5,7 +5,9 @@ ...@@ -5,7 +5,9 @@
#include <asm/cache.h> #include <asm/cache.h>
#include <asm-generic/dma-coherent.h> #include <asm-generic/dma-coherent.h>
#ifndef CONFIG_SGI_IP27 /* Kludge to fix 2.6.39 build for IP27 */
#include <dma-coherence.h> #include <dma-coherence.h>
#endif
extern struct dma_map_ops *mips_dma_map_ops; extern struct dma_map_ops *mips_dma_map_ops;
......
...@@ -70,6 +70,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm, ...@@ -70,6 +70,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
static inline void huge_ptep_clear_flush(struct vm_area_struct *vma, static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep) unsigned long addr, pte_t *ptep)
{ {
flush_tlb_mm(vma->vm_mm);
} }
static inline int huge_pte_none(pte_t pte) static inline int huge_pte_none(pte_t pte)
......
...@@ -88,7 +88,7 @@ struct bcm_tag { ...@@ -88,7 +88,7 @@ struct bcm_tag {
char kernel_crc[CRC_LEN]; char kernel_crc[CRC_LEN];
/* 228-235: Unused at present */ /* 228-235: Unused at present */
char reserved1[8]; char reserved1[8];
/* 236-239: CRC32 of header excluding tagVersion */ /* 236-239: CRC32 of header excluding last 20 bytes */
char header_crc[CRC_LEN]; char header_crc[CRC_LEN];
/* 240-255: Unused at present */ /* 240-255: Unused at present */
char reserved2[16]; char reserved2[16];
......
...@@ -211,7 +211,7 @@ EXPORT_SYMBOL(vdma_free); ...@@ -211,7 +211,7 @@ EXPORT_SYMBOL(vdma_free);
*/ */
int vdma_remap(unsigned long laddr, unsigned long paddr, unsigned long size) int vdma_remap(unsigned long laddr, unsigned long paddr, unsigned long size)
{ {
int first, pages, npages; int first, pages;
if (laddr > 0xffffff) { if (laddr > 0xffffff) {
if (vdma_debug) if (vdma_debug)
...@@ -228,8 +228,7 @@ int vdma_remap(unsigned long laddr, unsigned long paddr, unsigned long size) ...@@ -228,8 +228,7 @@ int vdma_remap(unsigned long laddr, unsigned long paddr, unsigned long size)
return -EINVAL; /* invalid physical address */ return -EINVAL; /* invalid physical address */
} }
npages = pages = pages = (((paddr & (VDMA_PAGESIZE - 1)) + size) >> 12) + 1;
(((paddr & (VDMA_PAGESIZE - 1)) + size) >> 12) + 1;
first = laddr >> 12; first = laddr >> 12;
if (vdma_debug) if (vdma_debug)
printk("vdma_remap: first=%x, pages=%x\n", first, pages); printk("vdma_remap: first=%x, pages=%x\n", first, pages);
......
...@@ -242,9 +242,7 @@ EXPORT_SYMBOL_GPL(jz4740_dma_get_residue); ...@@ -242,9 +242,7 @@ EXPORT_SYMBOL_GPL(jz4740_dma_get_residue);
static void jz4740_dma_chan_irq(struct jz4740_dma_chan *dma) static void jz4740_dma_chan_irq(struct jz4740_dma_chan *dma)
{ {
uint32_t status; (void) jz4740_dma_read(JZ_REG_DMA_STATUS_CTRL(dma->id));
status = jz4740_dma_read(JZ_REG_DMA_STATUS_CTRL(dma->id));
jz4740_dma_write_mask(JZ_REG_DMA_STATUS_CTRL(dma->id), 0, jz4740_dma_write_mask(JZ_REG_DMA_STATUS_CTRL(dma->id), 0,
JZ_DMA_STATUS_CTRL_ENABLE | JZ_DMA_STATUS_CTRL_TRANSFER_DONE); JZ_DMA_STATUS_CTRL_ENABLE | JZ_DMA_STATUS_CTRL_TRANSFER_DONE);
......
...@@ -89,7 +89,7 @@ static int jz4740_clockevent_set_next(unsigned long evt, ...@@ -89,7 +89,7 @@ static int jz4740_clockevent_set_next(unsigned long evt,
static struct clock_event_device jz4740_clockevent = { static struct clock_event_device jz4740_clockevent = {
.name = "jz4740-timer", .name = "jz4740-timer",
.features = CLOCK_EVT_FEAT_PERIODIC, .features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT,
.set_next_event = jz4740_clockevent_set_next, .set_next_event = jz4740_clockevent_set_next,
.set_mode = jz4740_clockevent_set_mode, .set_mode = jz4740_clockevent_set_mode,
.rating = 200, .rating = 200,
......
...@@ -27,11 +27,13 @@ void jz4740_timer_enable_watchdog(void) ...@@ -27,11 +27,13 @@ void jz4740_timer_enable_watchdog(void)
{ {
writel(BIT(16), jz4740_timer_base + JZ_REG_TIMER_STOP_CLEAR); writel(BIT(16), jz4740_timer_base + JZ_REG_TIMER_STOP_CLEAR);
} }
EXPORT_SYMBOL_GPL(jz4740_timer_enable_watchdog);
void jz4740_timer_disable_watchdog(void) void jz4740_timer_disable_watchdog(void)
{ {
writel(BIT(16), jz4740_timer_base + JZ_REG_TIMER_STOP_SET); writel(BIT(16), jz4740_timer_base + JZ_REG_TIMER_STOP_SET);
} }
EXPORT_SYMBOL_GPL(jz4740_timer_disable_watchdog);
void __init jz4740_timer_init(void) void __init jz4740_timer_init(void)
{ {
......
...@@ -23,6 +23,7 @@ ...@@ -23,6 +23,7 @@
#define JAL 0x0c000000 /* jump & link: ip --> ra, jump to target */ #define JAL 0x0c000000 /* jump & link: ip --> ra, jump to target */
#define ADDR_MASK 0x03ffffff /* op_code|addr : 31...26|25 ....0 */ #define ADDR_MASK 0x03ffffff /* op_code|addr : 31...26|25 ....0 */
#define JUMP_RANGE_MASK ((1UL << 28) - 1)
#define INSN_NOP 0x00000000 /* nop */ #define INSN_NOP 0x00000000 /* nop */
#define INSN_JAL(addr) \ #define INSN_JAL(addr) \
...@@ -44,12 +45,12 @@ static inline void ftrace_dyn_arch_init_insns(void) ...@@ -44,12 +45,12 @@ static inline void ftrace_dyn_arch_init_insns(void)
/* jal (ftrace_caller + 8), jump over the first two instruction */ /* jal (ftrace_caller + 8), jump over the first two instruction */
buf = (u32 *)&insn_jal_ftrace_caller; buf = (u32 *)&insn_jal_ftrace_caller;
uasm_i_jal(&buf, (FTRACE_ADDR + 8)); uasm_i_jal(&buf, (FTRACE_ADDR + 8) & JUMP_RANGE_MASK);
#ifdef CONFIG_FUNCTION_GRAPH_TRACER #ifdef CONFIG_FUNCTION_GRAPH_TRACER
/* j ftrace_graph_caller */ /* j ftrace_graph_caller */
buf = (u32 *)&insn_j_ftrace_graph_caller; buf = (u32 *)&insn_j_ftrace_graph_caller;
uasm_i_j(&buf, (unsigned long)ftrace_graph_caller); uasm_i_j(&buf, (unsigned long)ftrace_graph_caller & JUMP_RANGE_MASK);
#endif #endif
} }
......
...@@ -540,8 +540,8 @@ asmlinkage void do_syscall_trace(struct pt_regs *regs, int entryexit) ...@@ -540,8 +540,8 @@ asmlinkage void do_syscall_trace(struct pt_regs *regs, int entryexit)
secure_computing(regs->regs[2]); secure_computing(regs->regs[2]);
if (unlikely(current->audit_context) && entryexit) if (unlikely(current->audit_context) && entryexit)
audit_syscall_exit(AUDITSC_RESULT(regs->regs[2]), audit_syscall_exit(AUDITSC_RESULT(regs->regs[7]),
regs->regs[2]); -regs->regs[2]);
if (!(current->ptrace & PT_PTRACED)) if (!(current->ptrace & PT_PTRACED))
goto out; goto out;
......
...@@ -565,7 +565,7 @@ einval: li v0, -ENOSYS ...@@ -565,7 +565,7 @@ einval: li v0, -ENOSYS
sys sys_ioprio_get 2 /* 4315 */ sys sys_ioprio_get 2 /* 4315 */
sys sys_utimensat 4 sys sys_utimensat 4
sys sys_signalfd 3 sys sys_signalfd 3
sys sys_ni_syscall 0 sys sys_ni_syscall 0 /* was timerfd */
sys sys_eventfd 1 sys sys_eventfd 1
sys sys_fallocate 6 /* 4320 */ sys sys_fallocate 6 /* 4320 */
sys sys_timerfd_create 2 sys sys_timerfd_create 2
......
...@@ -404,7 +404,7 @@ sys_call_table: ...@@ -404,7 +404,7 @@ sys_call_table:
PTR sys_ioprio_get PTR sys_ioprio_get
PTR sys_utimensat /* 5275 */ PTR sys_utimensat /* 5275 */
PTR sys_signalfd PTR sys_signalfd
PTR sys_ni_syscall PTR sys_ni_syscall /* was timerfd */
PTR sys_eventfd PTR sys_eventfd
PTR sys_fallocate PTR sys_fallocate
PTR sys_timerfd_create /* 5280 */ PTR sys_timerfd_create /* 5280 */
......
...@@ -403,7 +403,7 @@ EXPORT(sysn32_call_table) ...@@ -403,7 +403,7 @@ EXPORT(sysn32_call_table)
PTR sys_ioprio_get PTR sys_ioprio_get
PTR compat_sys_utimensat PTR compat_sys_utimensat
PTR compat_sys_signalfd /* 6280 */ PTR compat_sys_signalfd /* 6280 */
PTR sys_ni_syscall PTR sys_ni_syscall /* was timerfd */
PTR sys_eventfd PTR sys_eventfd
PTR sys_fallocate PTR sys_fallocate
PTR sys_timerfd_create PTR sys_timerfd_create
......
...@@ -522,7 +522,7 @@ sys_call_table: ...@@ -522,7 +522,7 @@ sys_call_table:
PTR sys_ioprio_get /* 4315 */ PTR sys_ioprio_get /* 4315 */
PTR compat_sys_utimensat PTR compat_sys_utimensat
PTR compat_sys_signalfd PTR compat_sys_signalfd
PTR sys_ni_syscall PTR sys_ni_syscall /* was timerfd */
PTR sys_eventfd PTR sys_eventfd
PTR sys32_fallocate /* 4320 */ PTR sys32_fallocate /* 4320 */
PTR sys_timerfd_create PTR sys_timerfd_create
......
...@@ -374,7 +374,8 @@ void __noreturn die(const char *str, struct pt_regs *regs) ...@@ -374,7 +374,8 @@ void __noreturn die(const char *str, struct pt_regs *regs)
unsigned long dvpret = dvpe(); unsigned long dvpret = dvpe();
#endif /* CONFIG_MIPS_MT_SMTC */ #endif /* CONFIG_MIPS_MT_SMTC */
notify_die(DIE_OOPS, str, regs, 0, regs_to_trapnr(regs), SIGSEGV); if (notify_die(DIE_OOPS, str, regs, 0, regs_to_trapnr(regs), SIGSEGV) == NOTIFY_STOP)
sig = 0;
console_verbose(); console_verbose();
spin_lock_irq(&die_lock); spin_lock_irq(&die_lock);
...@@ -383,9 +384,6 @@ void __noreturn die(const char *str, struct pt_regs *regs) ...@@ -383,9 +384,6 @@ void __noreturn die(const char *str, struct pt_regs *regs)
mips_mt_regdump(dvpret); mips_mt_regdump(dvpret);
#endif /* CONFIG_MIPS_MT_SMTC */ #endif /* CONFIG_MIPS_MT_SMTC */
if (notify_die(DIE_OOPS, str, regs, 0, regs_to_trapnr(regs), SIGSEGV) == NOTIFY_STOP)
sig = 0;
printk("%s[#%d]:\n", str, ++die_counter); printk("%s[#%d]:\n", str, ++die_counter);
show_registers(regs); show_registers(regs);
add_taint(TAINT_DIE); add_taint(TAINT_DIE);
......
...@@ -74,6 +74,7 @@ SECTIONS ...@@ -74,6 +74,7 @@ SECTIONS
INIT_TASK_DATA(PAGE_SIZE) INIT_TASK_DATA(PAGE_SIZE)
NOSAVE_DATA NOSAVE_DATA
CACHELINE_ALIGNED_DATA(1 << CONFIG_MIPS_L1_CACHE_SHIFT) CACHELINE_ALIGNED_DATA(1 << CONFIG_MIPS_L1_CACHE_SHIFT)
READ_MOSTLY_DATA(1 << CONFIG_MIPS_L1_CACHE_SHIFT)
DATA_DATA DATA_DATA
CONSTRUCTORS CONSTRUCTORS
} }
......
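The linker-script hunk above collects the .data..read_mostly input section that the asm/cache.h change earlier in this commit renames. A one-line sketch of how kernel code lands in that section, with an illustrative variable name:

#include <linux/cache.h>

/* __read_mostly (via asm/cache.h) places the variable in .data..read_mostly,
 * which READ_MOSTLY_DATA() in the linker script gathers so that rarely
 * written data shares cache lines. */
static int example_tuning_knob __read_mostly = 1;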
...@@ -29,9 +29,10 @@ unsigned long memsize, highmemsize; ...@@ -29,9 +29,10 @@ unsigned long memsize, highmemsize;
#define parse_even_earlier(res, option, p) \ #define parse_even_earlier(res, option, p) \
do { \ do { \
int ret; \ unsigned int tmp __maybe_unused; \
\
if (strncmp(option, (char *)p, strlen(option)) == 0) \ if (strncmp(option, (char *)p, strlen(option)) == 0) \
ret = strict_strtol((char *)p + strlen(option"="), 10, &res); \ tmp = strict_strtol((char *)p + strlen(option"="), 10, &res); \
} while (0) } while (0)
void __init prom_init_env(void) void __init prom_init_env(void)
......
...@@ -1075,7 +1075,6 @@ static int __cpuinit probe_scache(void) ...@@ -1075,7 +1075,6 @@ static int __cpuinit probe_scache(void)
unsigned long flags, addr, begin, end, pow2; unsigned long flags, addr, begin, end, pow2;
unsigned int config = read_c0_config(); unsigned int config = read_c0_config();
struct cpuinfo_mips *c = &current_cpu_data; struct cpuinfo_mips *c = &current_cpu_data;
int tmp;
if (config & CONF_SC) if (config & CONF_SC)
return 0; return 0;
...@@ -1108,7 +1107,6 @@ static int __cpuinit probe_scache(void) ...@@ -1108,7 +1107,6 @@ static int __cpuinit probe_scache(void)
/* Now search for the wrap around point. */ /* Now search for the wrap around point. */
pow2 = (128 * 1024); pow2 = (128 * 1024);
tmp = 0;
for (addr = begin + (128 * 1024); addr < end; addr = begin + pow2) { for (addr = begin + (128 * 1024); addr < end; addr = begin + pow2) {
cache_op(Index_Load_Tag_SD, addr); cache_op(Index_Load_Tag_SD, addr);
__asm__ __volatile__("nop; nop; nop; nop;"); /* hazard... */ __asm__ __volatile__("nop; nop; nop; nop;"); /* hazard... */
......
...@@ -1151,8 +1151,8 @@ static void __cpuinit build_r4000_tlb_refill_handler(void) ...@@ -1151,8 +1151,8 @@ static void __cpuinit build_r4000_tlb_refill_handler(void)
struct uasm_reloc *r = relocs; struct uasm_reloc *r = relocs;
u32 *f; u32 *f;
unsigned int final_len; unsigned int final_len;
struct mips_huge_tlb_info htlb_info; struct mips_huge_tlb_info htlb_info __maybe_unused;
enum vmalloc64_mode vmalloc_mode; enum vmalloc64_mode vmalloc_mode __maybe_unused;
memset(tlb_handler, 0, sizeof(tlb_handler)); memset(tlb_handler, 0, sizeof(tlb_handler));
memset(labels, 0, sizeof(labels)); memset(labels, 0, sizeof(labels));
......
...@@ -193,8 +193,6 @@ extern struct plat_smp_ops msmtc_smp_ops; ...@@ -193,8 +193,6 @@ extern struct plat_smp_ops msmtc_smp_ops;
void __init prom_init(void) void __init prom_init(void)
{ {
int result;
prom_argc = fw_arg0; prom_argc = fw_arg0;
_prom_argv = (int *) fw_arg1; _prom_argv = (int *) fw_arg1;
_prom_envp = (int *) fw_arg2; _prom_envp = (int *) fw_arg2;
...@@ -360,20 +358,14 @@ void __init prom_init(void) ...@@ -360,20 +358,14 @@ void __init prom_init(void)
#ifdef CONFIG_SERIAL_8250_CONSOLE #ifdef CONFIG_SERIAL_8250_CONSOLE
console_config(); console_config();
#endif #endif
/* Early detection of CMP support */
result = gcmp_probe(GCMP_BASE_ADDR, GCMP_ADDRSPACE_SZ);
#ifdef CONFIG_MIPS_CMP #ifdef CONFIG_MIPS_CMP
if (result) /* Early detection of CMP support */
if (gcmp_probe(GCMP_BASE_ADDR, GCMP_ADDRSPACE_SZ))
register_smp_ops(&cmp_smp_ops); register_smp_ops(&cmp_smp_ops);
else
#endif #endif
#ifdef CONFIG_MIPS_MT_SMP #ifdef CONFIG_MIPS_MT_SMP
#ifdef CONFIG_MIPS_CMP
if (!result)
register_smp_ops(&vsmp_smp_ops); register_smp_ops(&vsmp_smp_ops);
#else
register_smp_ops(&vsmp_smp_ops);
#endif
#endif #endif
#ifdef CONFIG_MIPS_MT_SMTC #ifdef CONFIG_MIPS_MT_SMTC
register_smp_ops(&msmtc_smp_ops); register_smp_ops(&msmtc_smp_ops);
......
...@@ -56,7 +56,6 @@ static DEFINE_RAW_SPINLOCK(mips_irq_lock); ...@@ -56,7 +56,6 @@ static DEFINE_RAW_SPINLOCK(mips_irq_lock);
static inline int mips_pcibios_iack(void) static inline int mips_pcibios_iack(void)
{ {
int irq; int irq;
u32 dummy;
/* /*
* Determine highest priority pending interrupt by performing * Determine highest priority pending interrupt by performing
...@@ -83,7 +82,7 @@ static inline int mips_pcibios_iack(void) ...@@ -83,7 +82,7 @@ static inline int mips_pcibios_iack(void)
BONITO_PCIMAP_CFG = 0x20000; BONITO_PCIMAP_CFG = 0x20000;
/* Flush Bonito register block */ /* Flush Bonito register block */
dummy = BONITO_PCIMAP_CFG; (void) BONITO_PCIMAP_CFG;
iob(); /* sync */ iob(); /* sync */
irq = __raw_readl((u32 *)_pcictrl_bonito_pcicfg); irq = __raw_readl((u32 *)_pcictrl_bonito_pcicfg);
......
...@@ -97,7 +97,7 @@ static int msp_per_irq_set_affinity(struct irq_data *d, ...@@ -97,7 +97,7 @@ static int msp_per_irq_set_affinity(struct irq_data *d,
static struct irq_chip msp_per_irq_controller = { static struct irq_chip msp_per_irq_controller = {
.name = "MSP_PER", .name = "MSP_PER",
.irq_enable = unmask_per_irq. .irq_enable = unmask_per_irq,
.irq_disable = mask_per_irq, .irq_disable = mask_per_irq,
.irq_ack = msp_per_irq_ack, .irq_ack = msp_per_irq_ack,
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
......
...@@ -35,7 +35,7 @@ LEAF(swsusp_arch_resume) ...@@ -35,7 +35,7 @@ LEAF(swsusp_arch_resume)
0: 0:
PTR_L t1, PBE_ADDRESS(t0) /* source */ PTR_L t1, PBE_ADDRESS(t0) /* source */
PTR_L t2, PBE_ORIG_ADDRESS(t0) /* destination */ PTR_L t2, PBE_ORIG_ADDRESS(t0) /* destination */
PTR_ADDIU t3, t1, PAGE_SIZE PTR_ADDU t3, t1, PAGE_SIZE
1: 1:
REG_L t8, (t1) REG_L t8, (t1)
REG_S t8, (t2) REG_S t8, (t2)
......
...@@ -185,7 +185,7 @@ int __init rb532_gpio_init(void) ...@@ -185,7 +185,7 @@ int __init rb532_gpio_init(void)
struct resource *r; struct resource *r;
r = rb532_gpio_reg0_res; r = rb532_gpio_reg0_res;
rb532_gpio_chip->regbase = ioremap_nocache(r->start, r->end - r->start); rb532_gpio_chip->regbase = ioremap_nocache(r->start, resource_size(r));
if (!rb532_gpio_chip->regbase) { if (!rb532_gpio_chip->regbase) {
printk(KERN_ERR "rb532: cannot remap GPIO register 0\n"); printk(KERN_ERR "rb532: cannot remap GPIO register 0\n");
......
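The rb532 hunk above replaces an open-coded length with resource_size(). The point of the change is an off-by-one: struct resource end addresses are inclusive, so the helper from <linux/ioport.h> adds one:

static inline resource_size_t resource_size(const struct resource *res)
{
        return res->end - res->start + 1;       /* inclusive end address */
}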
...@@ -132,7 +132,7 @@ static struct platform_device eth1_device = { ...@@ -132,7 +132,7 @@ static struct platform_device eth1_device = {
*/ */
static int __init sgiseeq_devinit(void) static int __init sgiseeq_devinit(void)
{ {
unsigned int tmp; unsigned int pbdma __maybe_unused;
int res, i; int res, i;
eth0_pd.hpc = hpc3c0; eth0_pd.hpc = hpc3c0;
...@@ -151,7 +151,7 @@ static int __init sgiseeq_devinit(void) ...@@ -151,7 +151,7 @@ static int __init sgiseeq_devinit(void)
/* Second HPC is missing? */ /* Second HPC is missing? */
if (ip22_is_fullhouse() || if (ip22_is_fullhouse() ||
get_dbe(tmp, (unsigned int *)&hpc3c1->pbdma[1])) get_dbe(pbdma, (unsigned int *)&hpc3c1->pbdma[1]))
return 0; return 0;
sgimc->giopar |= SGIMC_GIOPAR_MASTEREXP1 | SGIMC_GIOPAR_EXP164 | sgimc->giopar |= SGIMC_GIOPAR_MASTEREXP1 | SGIMC_GIOPAR_EXP164 |
......
...@@ -32,7 +32,7 @@ ...@@ -32,7 +32,7 @@
static unsigned long dosample(void) static unsigned long dosample(void)
{ {
u32 ct0, ct1; u32 ct0, ct1;
u8 msb, lsb; u8 msb;
/* Start the counter. */ /* Start the counter. */
sgint->tcword = (SGINT_TCWORD_CNT2 | SGINT_TCWORD_CALL | sgint->tcword = (SGINT_TCWORD_CNT2 | SGINT_TCWORD_CALL |
...@@ -46,7 +46,7 @@ static unsigned long dosample(void) ...@@ -46,7 +46,7 @@ static unsigned long dosample(void)
/* Latch and spin until top byte of counter2 is zero */ /* Latch and spin until top byte of counter2 is zero */
do { do {
writeb(SGINT_TCWORD_CNT2 | SGINT_TCWORD_CLAT, &sgint->tcword); writeb(SGINT_TCWORD_CNT2 | SGINT_TCWORD_CLAT, &sgint->tcword);
lsb = readb(&sgint->tcnt2); (void) readb(&sgint->tcnt2);
msb = readb(&sgint->tcnt2); msb = readb(&sgint->tcnt2);
ct1 = read_c0_count(); ct1 = read_c0_count();
} while (msb); } while (msb);
......
...@@ -29,7 +29,6 @@ unsigned long hub_pio_map(cnodeid_t cnode, xwidgetnum_t widget, ...@@ -29,7 +29,6 @@ unsigned long hub_pio_map(cnodeid_t cnode, xwidgetnum_t widget,
unsigned long xtalk_addr, size_t size) unsigned long xtalk_addr, size_t size)
{ {
nasid_t nasid = COMPACT_TO_NASID_NODEID(cnode); nasid_t nasid = COMPACT_TO_NASID_NODEID(cnode);
volatile hubreg_t junk;
unsigned i; unsigned i;
/* use small-window mapping if possible */ /* use small-window mapping if possible */
...@@ -64,7 +63,7 @@ unsigned long hub_pio_map(cnodeid_t cnode, xwidgetnum_t widget, ...@@ -64,7 +63,7 @@ unsigned long hub_pio_map(cnodeid_t cnode, xwidgetnum_t widget,
* after we write it. * after we write it.
*/ */
IIO_ITTE_PUT(nasid, i, HUB_PIO_MAP_TO_MEM, widget, xtalk_addr); IIO_ITTE_PUT(nasid, i, HUB_PIO_MAP_TO_MEM, widget, xtalk_addr);
junk = HUB_L(IIO_ITTE_GET(nasid, i)); (void) HUB_L(IIO_ITTE_GET(nasid, i));
return NODE_BWIN_BASE(nasid, widget) + (xtalk_addr % BWIN_SIZE); return NODE_BWIN_BASE(nasid, widget) + (xtalk_addr % BWIN_SIZE);
} }
......
...@@ -54,11 +54,8 @@ void __init setup_replication_mask(void) ...@@ -54,11 +54,8 @@ void __init setup_replication_mask(void)
static __init void set_ktext_source(nasid_t client_nasid, nasid_t server_nasid) static __init void set_ktext_source(nasid_t client_nasid, nasid_t server_nasid)
{ {
cnodeid_t client_cnode;
kern_vars_t *kvp; kern_vars_t *kvp;
client_cnode = NASID_TO_COMPACT_NODEID(client_nasid);
kvp = &hub_data(client_nasid)->kern_vars; kvp = &hub_data(client_nasid)->kern_vars;
KERN_VARS_ADDR(client_nasid) = (unsigned long)kvp; KERN_VARS_ADDR(client_nasid) = (unsigned long)kvp;
......
...@@ -95,7 +95,7 @@ static void __init sni_a20r_timer_setup(void) ...@@ -95,7 +95,7 @@ static void __init sni_a20r_timer_setup(void)
static __init unsigned long dosample(void) static __init unsigned long dosample(void)
{ {
u32 ct0, ct1; u32 ct0, ct1;
volatile u8 msb, lsb; volatile u8 msb;
/* Start the counter. */ /* Start the counter. */
outb_p(0x34, 0x43); outb_p(0x34, 0x43);
...@@ -108,7 +108,7 @@ static __init unsigned long dosample(void) ...@@ -108,7 +108,7 @@ static __init unsigned long dosample(void)
/* Latch and spin until top byte of counter0 is zero */ /* Latch and spin until top byte of counter0 is zero */
do { do {
outb(0x00, 0x43); outb(0x00, 0x43);
lsb = inb(0x40); (void) inb(0x40);
msb = inb(0x40); msb = inb(0x40);
ct1 = read_c0_count(); ct1 = read_c0_count();
} while (msb); } while (msb);
......
...@@ -933,12 +933,16 @@ int ptrace_set_debugreg(struct task_struct *task, unsigned long addr, ...@@ -933,12 +933,16 @@ int ptrace_set_debugreg(struct task_struct *task, unsigned long addr,
if (data && !(data & DABR_TRANSLATION)) if (data && !(data & DABR_TRANSLATION))
return -EIO; return -EIO;
#ifdef CONFIG_HAVE_HW_BREAKPOINT #ifdef CONFIG_HAVE_HW_BREAKPOINT
if (ptrace_get_breakpoints(task) < 0)
return -ESRCH;
bp = thread->ptrace_bps[0]; bp = thread->ptrace_bps[0];
if ((!data) || !(data & (DABR_DATA_WRITE | DABR_DATA_READ))) { if ((!data) || !(data & (DABR_DATA_WRITE | DABR_DATA_READ))) {
if (bp) { if (bp) {
unregister_hw_breakpoint(bp); unregister_hw_breakpoint(bp);
thread->ptrace_bps[0] = NULL; thread->ptrace_bps[0] = NULL;
} }
ptrace_put_breakpoints(task);
return 0; return 0;
} }
if (bp) { if (bp) {
...@@ -948,9 +952,12 @@ int ptrace_set_debugreg(struct task_struct *task, unsigned long addr, ...@@ -948,9 +952,12 @@ int ptrace_set_debugreg(struct task_struct *task, unsigned long addr,
(DABR_DATA_WRITE | DABR_DATA_READ), (DABR_DATA_WRITE | DABR_DATA_READ),
&attr.bp_type); &attr.bp_type);
ret = modify_user_hw_breakpoint(bp, &attr); ret = modify_user_hw_breakpoint(bp, &attr);
if (ret) if (ret) {
ptrace_put_breakpoints(task);
return ret; return ret;
}
thread->ptrace_bps[0] = bp; thread->ptrace_bps[0] = bp;
ptrace_put_breakpoints(task);
thread->dabr = data; thread->dabr = data;
return 0; return 0;
} }
...@@ -965,9 +972,12 @@ int ptrace_set_debugreg(struct task_struct *task, unsigned long addr, ...@@ -965,9 +972,12 @@ int ptrace_set_debugreg(struct task_struct *task, unsigned long addr,
ptrace_triggered, task); ptrace_triggered, task);
if (IS_ERR(bp)) { if (IS_ERR(bp)) {
thread->ptrace_bps[0] = NULL; thread->ptrace_bps[0] = NULL;
ptrace_put_breakpoints(task);
return PTR_ERR(bp); return PTR_ERR(bp);
} }
ptrace_put_breakpoints(task);
#endif /* CONFIG_HAVE_HW_BREAKPOINT */ #endif /* CONFIG_HAVE_HW_BREAKPOINT */
/* Move contents to the DABR register */ /* Move contents to the DABR register */
......
...@@ -318,17 +318,20 @@ static const struct platform_suspend_ops mpc83xx_suspend_ops = { ...@@ -318,17 +318,20 @@ static const struct platform_suspend_ops mpc83xx_suspend_ops = {
.end = mpc83xx_suspend_end, .end = mpc83xx_suspend_end,
}; };
static struct of_device_id pmc_match[];
static int pmc_probe(struct platform_device *ofdev) static int pmc_probe(struct platform_device *ofdev)
{ {
const struct of_device_id *match;
struct device_node *np = ofdev->dev.of_node; struct device_node *np = ofdev->dev.of_node;
struct resource res; struct resource res;
struct pmc_type *type; struct pmc_type *type;
int ret = 0; int ret = 0;
if (!ofdev->dev.of_match) match = of_match_device(pmc_match, &ofdev->dev);
if (!match)
return -EINVAL; return -EINVAL;
type = ofdev->dev.of_match->data; type = match->data;
if (!of_device_is_available(np)) if (!of_device_is_available(np))
return -ENODEV; return -ENODEV;
......
...@@ -304,8 +304,10 @@ static int __devinit fsl_msi_setup_hwirq(struct fsl_msi *msi, ...@@ -304,8 +304,10 @@ static int __devinit fsl_msi_setup_hwirq(struct fsl_msi *msi,
return 0; return 0;
} }
static const struct of_device_id fsl_of_msi_ids[];
static int __devinit fsl_of_msi_probe(struct platform_device *dev) static int __devinit fsl_of_msi_probe(struct platform_device *dev)
{ {
const struct of_device_id *match;
struct fsl_msi *msi; struct fsl_msi *msi;
struct resource res; struct resource res;
int err, i, j, irq_index, count; int err, i, j, irq_index, count;
...@@ -316,9 +318,10 @@ static int __devinit fsl_of_msi_probe(struct platform_device *dev) ...@@ -316,9 +318,10 @@ static int __devinit fsl_of_msi_probe(struct platform_device *dev)
u32 offset; u32 offset;
static const u32 all_avail[] = { 0, NR_MSI_IRQS }; static const u32 all_avail[] = { 0, NR_MSI_IRQS };
if (!dev->dev.of_match) match = of_match_device(fsl_of_msi_ids, &dev->dev);
if (!match)
return -EINVAL; return -EINVAL;
features = dev->dev.of_match->data; features = match->data;
printk(KERN_DEBUG "Setting up Freescale MSI support\n"); printk(KERN_DEBUG "Setting up Freescale MSI support\n");
......
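Both probe() hunks above make the same conversion: a driver can no longer dereference the removed dev.of_match pointer, so it forward-declares its match table and looks the entry up itself with of_match_device(). The sparc sabre and schizo probes further down in this commit get the identical treatment. A minimal sketch of the pattern, with hypothetical foo_* names standing in for the real drivers:

#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>

struct foo_type {
	int variant;				/* per-compatible configuration */
};

static struct foo_type foo_type_a = { .variant = 1 };

/* Forward declaration so the probe routine can reference the table below. */
static const struct of_device_id foo_match[];

static int foo_probe(struct platform_device *ofdev)
{
	const struct of_device_id *match;
	struct foo_type *type;

	/* Replaces the removed ofdev->dev.of_match dereference. */
	match = of_match_device(foo_match, &ofdev->dev);
	if (!match)
		return -EINVAL;
	type = match->data;

	dev_info(&ofdev->dev, "probed variant %d\n", type->variant);
	return 0;
}

static const struct of_device_id foo_match[] = {
	{ .compatible = "acme,foo", .data = &foo_type_a },
	{ /* sentinel */ },
};

The forward declaration mirrors what the commit itself does, so each match table can stay next to the platform_driver definition at the end of its file.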
...@@ -9,9 +9,22 @@ ...@@ -9,9 +9,22 @@
#define _ASM_S390_DIAG_H #define _ASM_S390_DIAG_H
/* /*
* Diagnose 10: Release pages * Diagnose 10: Release page range
*/ */
extern void diag10(unsigned long addr); static inline void diag10_range(unsigned long start_pfn, unsigned long num_pfn)
{
unsigned long start_addr, end_addr;
start_addr = start_pfn << PAGE_SHIFT;
end_addr = (start_pfn + num_pfn - 1) << PAGE_SHIFT;
asm volatile(
"0: diag %0,%1,0x10\n"
"1:\n"
EX_TABLE(0b, 1b)
EX_TABLE(1b, 1b)
: : "a" (start_addr), "a" (end_addr));
}
/* /*
* Diagnose 14: Input spool file manipulation * Diagnose 14: Input spool file manipulation
......
...@@ -23,7 +23,7 @@ static inline int init_new_context(struct task_struct *tsk, ...@@ -23,7 +23,7 @@ static inline int init_new_context(struct task_struct *tsk,
#ifdef CONFIG_64BIT #ifdef CONFIG_64BIT
mm->context.asce_bits |= _ASCE_TYPE_REGION3; mm->context.asce_bits |= _ASCE_TYPE_REGION3;
#endif #endif
if (current->mm->context.alloc_pgste) { if (current->mm && current->mm->context.alloc_pgste) {
/* /*
* alloc_pgste indicates, that any NEW context will be created * alloc_pgste indicates, that any NEW context will be created
* with extended page tables. The old context is unchanged. The * with extended page tables. The old context is unchanged. The
......
...@@ -8,27 +8,6 @@ ...@@ -8,27 +8,6 @@
#include <linux/module.h> #include <linux/module.h>
#include <asm/diag.h> #include <asm/diag.h>
/*
* Diagnose 10: Release pages
*/
void diag10(unsigned long addr)
{
if (addr >= 0x7ff00000)
return;
asm volatile(
#ifdef CONFIG_64BIT
" sam31\n"
" diag %0,%0,0x10\n"
"0: sam64\n"
#else
" diag %0,%0,0x10\n"
"0:\n"
#endif
EX_TABLE(0b, 0b)
: : "a" (addr));
}
EXPORT_SYMBOL(diag10);
/* /*
* Diagnose 14: Input spool file manipulation * Diagnose 14: Input spool file manipulation
*/ */
......
...@@ -672,6 +672,7 @@ static struct insn opcode_b2[] = { ...@@ -672,6 +672,7 @@ static struct insn opcode_b2[] = {
{ "rp", 0x77, INSTR_S_RD }, { "rp", 0x77, INSTR_S_RD },
{ "stcke", 0x78, INSTR_S_RD }, { "stcke", 0x78, INSTR_S_RD },
{ "sacf", 0x79, INSTR_S_RD }, { "sacf", 0x79, INSTR_S_RD },
{ "spp", 0x80, INSTR_S_RD },
{ "stsi", 0x7d, INSTR_S_RD }, { "stsi", 0x7d, INSTR_S_RD },
{ "srnm", 0x99, INSTR_S_RD }, { "srnm", 0x99, INSTR_S_RD },
{ "stfpc", 0x9c, INSTR_S_RD }, { "stfpc", 0x9c, INSTR_S_RD },
......
...@@ -836,7 +836,7 @@ restart_base: ...@@ -836,7 +836,7 @@ restart_base:
stosm __SF_EMPTY(%r15),0x04 # now we can turn dat on stosm __SF_EMPTY(%r15),0x04 # now we can turn dat on
basr %r14,0 basr %r14,0
l %r14,restart_addr-.(%r14) l %r14,restart_addr-.(%r14)
br %r14 # branch to start_secondary basr %r14,%r14 # branch to start_secondary
restart_addr: restart_addr:
.long start_secondary .long start_secondary
.align 8 .align 8
......
...@@ -841,7 +841,7 @@ restart_base: ...@@ -841,7 +841,7 @@ restart_base:
mvc __LC_SYSTEM_TIMER(8),__TI_system_timer(%r1) mvc __LC_SYSTEM_TIMER(8),__TI_system_timer(%r1)
xc __LC_STEAL_TIMER(8),__LC_STEAL_TIMER xc __LC_STEAL_TIMER(8),__LC_STEAL_TIMER
stosm __SF_EMPTY(%r15),0x04 # now we can turn dat on stosm __SF_EMPTY(%r15),0x04 # now we can turn dat on
jg start_secondary brasl %r14,start_secondary
.align 8 .align 8
restart_vtime: restart_vtime:
.long 0x7fffffff,0xffffffff .long 0x7fffffff,0xffffffff
......
...@@ -91,7 +91,7 @@ static long cmm_alloc_pages(long nr, long *counter, ...@@ -91,7 +91,7 @@ static long cmm_alloc_pages(long nr, long *counter,
} else } else
free_page((unsigned long) npa); free_page((unsigned long) npa);
} }
diag10(addr); diag10_range(addr >> PAGE_SHIFT, 1);
pa->pages[pa->index++] = addr; pa->pages[pa->index++] = addr;
(*counter)++; (*counter)++;
spin_unlock(&cmm_lock); spin_unlock(&cmm_lock);
......
...@@ -1021,20 +1021,14 @@ int hwsampler_deallocate() ...@@ -1021,20 +1021,14 @@ int hwsampler_deallocate()
return rc; return rc;
} }
long hwsampler_query_min_interval(void) unsigned long hwsampler_query_min_interval(void)
{ {
if (min_sampler_rate) return min_sampler_rate;
return min_sampler_rate;
else
return -EINVAL;
} }
long hwsampler_query_max_interval(void) unsigned long hwsampler_query_max_interval(void)
{ {
if (max_sampler_rate) return max_sampler_rate;
return max_sampler_rate;
else
return -EINVAL;
} }
unsigned long hwsampler_get_sample_overflow_count(unsigned int cpu) unsigned long hwsampler_get_sample_overflow_count(unsigned int cpu)
......
...@@ -102,8 +102,8 @@ int hwsampler_setup(void); ...@@ -102,8 +102,8 @@ int hwsampler_setup(void);
int hwsampler_shutdown(void); int hwsampler_shutdown(void);
int hwsampler_allocate(unsigned long sdbt, unsigned long sdb); int hwsampler_allocate(unsigned long sdbt, unsigned long sdb);
int hwsampler_deallocate(void); int hwsampler_deallocate(void);
long hwsampler_query_min_interval(void); unsigned long hwsampler_query_min_interval(void);
long hwsampler_query_max_interval(void); unsigned long hwsampler_query_max_interval(void);
int hwsampler_start_all(unsigned long interval); int hwsampler_start_all(unsigned long interval);
int hwsampler_stop_all(void); int hwsampler_stop_all(void);
int hwsampler_deactivate(unsigned int cpu); int hwsampler_deactivate(unsigned int cpu);
......
...@@ -145,15 +145,11 @@ static int oprofile_hwsampler_init(struct oprofile_operations *ops) ...@@ -145,15 +145,11 @@ static int oprofile_hwsampler_init(struct oprofile_operations *ops)
* create hwsampler files only if hwsampler_setup() succeeds. * create hwsampler files only if hwsampler_setup() succeeds.
*/ */
oprofile_min_interval = hwsampler_query_min_interval(); oprofile_min_interval = hwsampler_query_min_interval();
if (oprofile_min_interval < 0) { if (oprofile_min_interval == 0)
oprofile_min_interval = 0;
return -ENODEV; return -ENODEV;
}
oprofile_max_interval = hwsampler_query_max_interval(); oprofile_max_interval = hwsampler_query_max_interval();
if (oprofile_max_interval < 0) { if (oprofile_max_interval == 0)
oprofile_max_interval = 0;
return -ENODEV; return -ENODEV;
}
if (oprofile_timer_init(ops)) if (oprofile_timer_init(ops))
return -ENODEV; return -ENODEV;
......
...@@ -117,7 +117,11 @@ void user_enable_single_step(struct task_struct *child) ...@@ -117,7 +117,11 @@ void user_enable_single_step(struct task_struct *child)
set_tsk_thread_flag(child, TIF_SINGLESTEP); set_tsk_thread_flag(child, TIF_SINGLESTEP);
if (ptrace_get_breakpoints(child) < 0)
return;
set_single_step(child, pc); set_single_step(child, pc);
ptrace_put_breakpoints(child);
} }
void user_disable_single_step(struct task_struct *child) void user_disable_single_step(struct task_struct *child)
......
...@@ -165,7 +165,7 @@ static int __devinit apc_probe(struct platform_device *op) ...@@ -165,7 +165,7 @@ static int __devinit apc_probe(struct platform_device *op)
return 0; return 0;
} }
static struct of_device_id __initdata apc_match[] = { static struct of_device_id apc_match[] = {
{ {
.name = APC_OBPNAME, .name = APC_OBPNAME,
}, },
......
...@@ -452,8 +452,10 @@ static void __devinit sabre_pbm_init(struct pci_pbm_info *pbm, ...@@ -452,8 +452,10 @@ static void __devinit sabre_pbm_init(struct pci_pbm_info *pbm,
sabre_scan_bus(pbm, &op->dev); sabre_scan_bus(pbm, &op->dev);
} }
static const struct of_device_id sabre_match[];
static int __devinit sabre_probe(struct platform_device *op) static int __devinit sabre_probe(struct platform_device *op)
{ {
const struct of_device_id *match;
const struct linux_prom64_registers *pr_regs; const struct linux_prom64_registers *pr_regs;
struct device_node *dp = op->dev.of_node; struct device_node *dp = op->dev.of_node;
struct pci_pbm_info *pbm; struct pci_pbm_info *pbm;
...@@ -463,7 +465,8 @@ static int __devinit sabre_probe(struct platform_device *op) ...@@ -463,7 +465,8 @@ static int __devinit sabre_probe(struct platform_device *op)
const u32 *vdma; const u32 *vdma;
u64 clear_irq; u64 clear_irq;
hummingbird_p = op->dev.of_match && (op->dev.of_match->data != NULL); match = of_match_device(sabre_match, &op->dev);
hummingbird_p = match && (match->data != NULL);
if (!hummingbird_p) { if (!hummingbird_p) {
struct device_node *cpu_dp; struct device_node *cpu_dp;
......
...@@ -1458,11 +1458,15 @@ static int __devinit __schizo_init(struct platform_device *op, unsigned long chi ...@@ -1458,11 +1458,15 @@ static int __devinit __schizo_init(struct platform_device *op, unsigned long chi
return err; return err;
} }
static const struct of_device_id schizo_match[];
static int __devinit schizo_probe(struct platform_device *op) static int __devinit schizo_probe(struct platform_device *op)
{ {
if (!op->dev.of_match) const struct of_device_id *match;
match = of_match_device(schizo_match, &op->dev);
if (!match)
return -EINVAL; return -EINVAL;
return __schizo_init(op, (unsigned long) op->dev.of_match->data); return __schizo_init(op, (unsigned long)match->data);
} }
/* The ordering of this table is very important. Some Tomatillo /* The ordering of this table is very important. Some Tomatillo
......
...@@ -69,7 +69,7 @@ static int __devinit pmc_probe(struct platform_device *op) ...@@ -69,7 +69,7 @@ static int __devinit pmc_probe(struct platform_device *op)
return 0; return 0;
} }
static struct of_device_id __initdata pmc_match[] = { static struct of_device_id pmc_match[] = {
{ {
.name = PMC_OBPNAME, .name = PMC_OBPNAME,
}, },
......
...@@ -53,6 +53,7 @@ cpumask_t smp_commenced_mask = CPU_MASK_NONE; ...@@ -53,6 +53,7 @@ cpumask_t smp_commenced_mask = CPU_MASK_NONE;
void __cpuinit smp_store_cpu_info(int id) void __cpuinit smp_store_cpu_info(int id)
{ {
int cpu_node; int cpu_node;
int mid;
cpu_data(id).udelay_val = loops_per_jiffy; cpu_data(id).udelay_val = loops_per_jiffy;
...@@ -60,10 +61,13 @@ void __cpuinit smp_store_cpu_info(int id) ...@@ -60,10 +61,13 @@ void __cpuinit smp_store_cpu_info(int id)
cpu_data(id).clock_tick = prom_getintdefault(cpu_node, cpu_data(id).clock_tick = prom_getintdefault(cpu_node,
"clock-frequency", 0); "clock-frequency", 0);
cpu_data(id).prom_node = cpu_node; cpu_data(id).prom_node = cpu_node;
cpu_data(id).mid = cpu_get_hwmid(cpu_node); mid = cpu_get_hwmid(cpu_node);
if (cpu_data(id).mid < 0) if (mid < 0) {
panic("No MID found for CPU%d at node 0x%08d", id, cpu_node); printk(KERN_NOTICE "No MID found for CPU%d at node 0x%08d", id, cpu_node);
mid = 0;
}
cpu_data(id).mid = mid;
} }
void __init smp_cpus_done(unsigned int max_cpus) void __init smp_cpus_done(unsigned int max_cpus)
......
...@@ -168,7 +168,7 @@ static int __devinit clock_probe(struct platform_device *op) ...@@ -168,7 +168,7 @@ static int __devinit clock_probe(struct platform_device *op)
return 0; return 0;
} }
static struct of_device_id __initdata clock_match[] = { static struct of_device_id clock_match[] = {
{ {
.name = "eeprom", .name = "eeprom",
}, },
......
...@@ -289,10 +289,16 @@ cc_end_cruft: ...@@ -289,10 +289,16 @@ cc_end_cruft:
/* Also, handle the alignment code out of band. */ /* Also, handle the alignment code out of band. */
cc_dword_align: cc_dword_align:
cmp %g1, 6 cmp %g1, 16
bl,a ccte bge 1f
srl %g1, 1, %o3
2: cmp %o3, 0
be,a ccte
andcc %g1, 0xf, %o3 andcc %g1, 0xf, %o3
andcc %o0, 0x1, %g0 andcc %o3, %o0, %g0 ! Check %o0 only (%o1 has the same last 2 bits)
be,a 2b
srl %o3, 1, %o3
1: andcc %o0, 0x1, %g0
bne ccslow bne ccslow
andcc %o0, 0x2, %g0 andcc %o0, 0x2, %g0
be 1f be 1f
......
...@@ -5,6 +5,7 @@ ...@@ -5,6 +5,7 @@
#include <stdio.h> #include <stdio.h>
#include <stdlib.h> #include <stdlib.h>
#include <unistd.h>
#include <errno.h> #include <errno.h>
#include <signal.h> #include <signal.h>
#include <string.h> #include <string.h>
...@@ -75,6 +76,26 @@ void setup_hostinfo(char *buf, int len) ...@@ -75,6 +76,26 @@ void setup_hostinfo(char *buf, int len)
host.release, host.version, host.machine); host.release, host.version, host.machine);
} }
/*
* We cannot use glibc's abort(). It makes use of tgkill() which
* has no effect within UML's kernel threads.
* After that glibc would execute an invalid instruction to kill
* the calling process and UML crashes with SIGSEGV.
*/
static inline void __attribute__ ((noreturn)) uml_abort(void)
{
sigset_t sig;
fflush(NULL);
if (!sigemptyset(&sig) && !sigaddset(&sig, SIGABRT))
sigprocmask(SIG_UNBLOCK, &sig, 0);
for (;;)
if (kill(getpid(), SIGABRT) < 0)
exit(127);
}
void os_dump_core(void) void os_dump_core(void)
{ {
int pid; int pid;
...@@ -116,5 +137,5 @@ void os_dump_core(void) ...@@ -116,5 +137,5 @@ void os_dump_core(void)
while ((pid = waitpid(-1, NULL, WNOHANG | __WALL)) > 0) while ((pid = waitpid(-1, NULL, WNOHANG | __WALL)) > 0)
os_kill_ptraced_process(pid, 0); os_kill_ptraced_process(pid, 0);
abort(); uml_abort();
} }
...@@ -78,6 +78,7 @@ ...@@ -78,6 +78,7 @@
#define APIC_DEST_LOGICAL 0x00800 #define APIC_DEST_LOGICAL 0x00800
#define APIC_DEST_PHYSICAL 0x00000 #define APIC_DEST_PHYSICAL 0x00000
#define APIC_DM_FIXED 0x00000 #define APIC_DM_FIXED 0x00000
#define APIC_DM_FIXED_MASK 0x00700
#define APIC_DM_LOWEST 0x00100 #define APIC_DM_LOWEST 0x00100
#define APIC_DM_SMI 0x00200 #define APIC_DM_SMI 0x00200
#define APIC_DM_REMRD 0x00300 #define APIC_DM_REMRD 0x00300
......
...@@ -299,6 +299,7 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn, ...@@ -299,6 +299,7 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
/* Install a pte for a particular vaddr in kernel space. */ /* Install a pte for a particular vaddr in kernel space. */
void set_pte_vaddr(unsigned long vaddr, pte_t pte); void set_pte_vaddr(unsigned long vaddr, pte_t pte);
extern void native_pagetable_reserve(u64 start, u64 end);
#ifdef CONFIG_X86_32 #ifdef CONFIG_X86_32
extern void native_pagetable_setup_start(pgd_t *base); extern void native_pagetable_setup_start(pgd_t *base);
extern void native_pagetable_setup_done(pgd_t *base); extern void native_pagetable_setup_done(pgd_t *base);
......
...@@ -94,6 +94,8 @@ ...@@ -94,6 +94,8 @@
/* after this # consecutive successes, bump up the throttle if it was lowered */ /* after this # consecutive successes, bump up the throttle if it was lowered */
#define COMPLETE_THRESHOLD 5 #define COMPLETE_THRESHOLD 5
#define UV_LB_SUBNODEID 0x10
/* /*
* number of entries in the destination side payload queue * number of entries in the destination side payload queue
*/ */
...@@ -124,7 +126,7 @@ ...@@ -124,7 +126,7 @@
* The distribution specification (32 bytes) is interpreted as a 256-bit * The distribution specification (32 bytes) is interpreted as a 256-bit
* distribution vector. Adjacent bits correspond to consecutive even numbered * distribution vector. Adjacent bits correspond to consecutive even numbered
* nodeIDs. The result of adding the index of a given bit to the 15-bit * nodeIDs. The result of adding the index of a given bit to the 15-bit
* 'base_dest_nodeid' field of the header corresponds to the * 'base_dest_nasid' field of the header corresponds to the
* destination nodeID associated with that specified bit. * destination nodeID associated with that specified bit.
*/ */
struct bau_target_uvhubmask { struct bau_target_uvhubmask {
...@@ -176,7 +178,7 @@ struct bau_msg_payload { ...@@ -176,7 +178,7 @@ struct bau_msg_payload {
struct bau_msg_header { struct bau_msg_header {
unsigned int dest_subnodeid:6; /* must be 0x10, for the LB */ unsigned int dest_subnodeid:6; /* must be 0x10, for the LB */
/* bits 5:0 */ /* bits 5:0 */
unsigned int base_dest_nodeid:15; /* nasid of the */ unsigned int base_dest_nasid:15; /* nasid of the */
/* bits 20:6 */ /* first bit in uvhub map */ /* bits 20:6 */ /* first bit in uvhub map */
unsigned int command:8; /* message type */ unsigned int command:8; /* message type */
/* bits 28:21 */ /* bits 28:21 */
...@@ -378,6 +380,10 @@ struct ptc_stats { ...@@ -378,6 +380,10 @@ struct ptc_stats {
unsigned long d_rcanceled; /* number of messages canceled by resets */ unsigned long d_rcanceled; /* number of messages canceled by resets */
}; };
struct hub_and_pnode {
short uvhub;
short pnode;
};
/* /*
* one per-cpu; to locate the software tables * one per-cpu; to locate the software tables
*/ */
...@@ -399,10 +405,12 @@ struct bau_control { ...@@ -399,10 +405,12 @@ struct bau_control {
int baudisabled; int baudisabled;
int set_bau_off; int set_bau_off;
short cpu; short cpu;
short osnode;
short uvhub_cpu; short uvhub_cpu;
short uvhub; short uvhub;
short cpus_in_socket; short cpus_in_socket;
short cpus_in_uvhub; short cpus_in_uvhub;
short partition_base_pnode;
unsigned short message_number; unsigned short message_number;
unsigned short uvhub_quiesce; unsigned short uvhub_quiesce;
short socket_acknowledge_count[DEST_Q_SIZE]; short socket_acknowledge_count[DEST_Q_SIZE];
...@@ -422,15 +430,16 @@ struct bau_control { ...@@ -422,15 +430,16 @@ struct bau_control {
int congested_period; int congested_period;
cycles_t period_time; cycles_t period_time;
long period_requests; long period_requests;
struct hub_and_pnode *target_hub_and_pnode;
}; };
static inline int bau_uvhub_isset(int uvhub, struct bau_target_uvhubmask *dstp) static inline int bau_uvhub_isset(int uvhub, struct bau_target_uvhubmask *dstp)
{ {
return constant_test_bit(uvhub, &dstp->bits[0]); return constant_test_bit(uvhub, &dstp->bits[0]);
} }
static inline void bau_uvhub_set(int uvhub, struct bau_target_uvhubmask *dstp) static inline void bau_uvhub_set(int pnode, struct bau_target_uvhubmask *dstp)
{ {
__set_bit(uvhub, &dstp->bits[0]); __set_bit(pnode, &dstp->bits[0]);
} }
static inline void bau_uvhubs_clear(struct bau_target_uvhubmask *dstp, static inline void bau_uvhubs_clear(struct bau_target_uvhubmask *dstp,
int nbits) int nbits)
......
...@@ -398,6 +398,8 @@ struct uv_blade_info { ...@@ -398,6 +398,8 @@ struct uv_blade_info {
unsigned short nr_online_cpus; unsigned short nr_online_cpus;
unsigned short pnode; unsigned short pnode;
short memory_nid; short memory_nid;
spinlock_t nmi_lock;
unsigned long nmi_count;
}; };
extern struct uv_blade_info *uv_blade_info; extern struct uv_blade_info *uv_blade_info;
extern short *uv_node_to_blade; extern short *uv_node_to_blade;
......
...@@ -5,7 +5,7 @@ ...@@ -5,7 +5,7 @@
* *
* SGI UV MMR definitions * SGI UV MMR definitions
* *
* Copyright (C) 2007-2010 Silicon Graphics, Inc. All rights reserved. * Copyright (C) 2007-2011 Silicon Graphics, Inc. All rights reserved.
*/ */
#ifndef _ASM_X86_UV_UV_MMRS_H #ifndef _ASM_X86_UV_UV_MMRS_H
...@@ -1099,5 +1099,19 @@ union uvh_rtc1_int_config_u { ...@@ -1099,5 +1099,19 @@ union uvh_rtc1_int_config_u {
} s; } s;
}; };
/* ========================================================================= */
/* UVH_SCRATCH5 */
/* ========================================================================= */
#define UVH_SCRATCH5 0x2d0200UL
#define UVH_SCRATCH5_32 0x00778
#define UVH_SCRATCH5_SCRATCH5_SHFT 0
#define UVH_SCRATCH5_SCRATCH5_MASK 0xffffffffffffffffUL
union uvh_scratch5_u {
unsigned long v;
struct uvh_scratch5_s {
unsigned long scratch5 : 64; /* RW, W1CS */
} s;
};
#endif /* __ASM_UV_MMRS_X86_H__ */ #endif /* __ASM_UV_MMRS_X86_H__ */
...@@ -67,6 +67,17 @@ struct x86_init_oem { ...@@ -67,6 +67,17 @@ struct x86_init_oem {
void (*banner)(void); void (*banner)(void);
}; };
/**
* struct x86_init_mapping - platform specific initial kernel pagetable setup
* @pagetable_reserve: reserve a range of addresses for kernel pagetable usage
*
* For more details on the purpose of this hook, look in
* init_memory_mapping and the commit that added it.
*/
struct x86_init_mapping {
void (*pagetable_reserve)(u64 start, u64 end);
};
/** /**
* struct x86_init_paging - platform specific paging functions * struct x86_init_paging - platform specific paging functions
* @pagetable_setup_start: platform specific pre paging_init() call * @pagetable_setup_start: platform specific pre paging_init() call
...@@ -123,6 +134,7 @@ struct x86_init_ops { ...@@ -123,6 +134,7 @@ struct x86_init_ops {
struct x86_init_mpparse mpparse; struct x86_init_mpparse mpparse;
struct x86_init_irqs irqs; struct x86_init_irqs irqs;
struct x86_init_oem oem; struct x86_init_oem oem;
struct x86_init_mapping mapping;
struct x86_init_paging paging; struct x86_init_paging paging;
struct x86_init_timers timers; struct x86_init_timers timers;
struct x86_init_iommu iommu; struct x86_init_iommu iommu;
......
...@@ -37,6 +37,13 @@ ...@@ -37,6 +37,13 @@
#include <asm/smp.h> #include <asm/smp.h>
#include <asm/x86_init.h> #include <asm/x86_init.h>
#include <asm/emergency-restart.h> #include <asm/emergency-restart.h>
#include <asm/nmi.h>
/* BMC sets a bit this MMR non-zero before sending an NMI */
#define UVH_NMI_MMR UVH_SCRATCH5
#define UVH_NMI_MMR_CLEAR (UVH_NMI_MMR + 8)
#define UV_NMI_PENDING_MASK (1UL << 63)
DEFINE_PER_CPU(unsigned long, cpu_last_nmi_count);
DEFINE_PER_CPU(int, x2apic_extra_bits); DEFINE_PER_CPU(int, x2apic_extra_bits);
...@@ -642,18 +649,46 @@ void __cpuinit uv_cpu_init(void) ...@@ -642,18 +649,46 @@ void __cpuinit uv_cpu_init(void)
*/ */
int uv_handle_nmi(struct notifier_block *self, unsigned long reason, void *data) int uv_handle_nmi(struct notifier_block *self, unsigned long reason, void *data)
{ {
unsigned long real_uv_nmi;
int bid;
if (reason != DIE_NMIUNKNOWN) if (reason != DIE_NMIUNKNOWN)
return NOTIFY_OK; return NOTIFY_OK;
if (in_crash_kexec) if (in_crash_kexec)
/* do nothing if entering the crash kernel */ /* do nothing if entering the crash kernel */
return NOTIFY_OK; return NOTIFY_OK;
/* /*
* Use a lock so only one cpu prints at a time * Each blade has an MMR that indicates when an NMI has been sent
* to prevent intermixed output. * to cpus on the blade. If an NMI is detected, atomically
* clear the MMR and update a per-blade NMI count used to
* cause each cpu on the blade to notice a new NMI.
*/
bid = uv_numa_blade_id();
real_uv_nmi = (uv_read_local_mmr(UVH_NMI_MMR) & UV_NMI_PENDING_MASK);
if (unlikely(real_uv_nmi)) {
spin_lock(&uv_blade_info[bid].nmi_lock);
real_uv_nmi = (uv_read_local_mmr(UVH_NMI_MMR) & UV_NMI_PENDING_MASK);
if (real_uv_nmi) {
uv_blade_info[bid].nmi_count++;
uv_write_local_mmr(UVH_NMI_MMR_CLEAR, UV_NMI_PENDING_MASK);
}
spin_unlock(&uv_blade_info[bid].nmi_lock);
}
if (likely(__get_cpu_var(cpu_last_nmi_count) == uv_blade_info[bid].nmi_count))
return NOTIFY_DONE;
__get_cpu_var(cpu_last_nmi_count) = uv_blade_info[bid].nmi_count;
/*
* Use a lock so only one cpu prints at a time.
* This prevents intermixed output.
*/ */
spin_lock(&uv_nmi_lock); spin_lock(&uv_nmi_lock);
pr_info("NMI stack dump cpu %u:\n", smp_processor_id()); pr_info("UV NMI stack dump cpu %u:\n", smp_processor_id());
dump_stack(); dump_stack();
spin_unlock(&uv_nmi_lock); spin_unlock(&uv_nmi_lock);
...@@ -661,7 +696,8 @@ int uv_handle_nmi(struct notifier_block *self, unsigned long reason, void *data) ...@@ -661,7 +696,8 @@ int uv_handle_nmi(struct notifier_block *self, unsigned long reason, void *data)
} }
static struct notifier_block uv_dump_stack_nmi_nb = { static struct notifier_block uv_dump_stack_nmi_nb = {
.notifier_call = uv_handle_nmi .notifier_call = uv_handle_nmi,
.priority = NMI_LOCAL_LOW_PRIOR - 1,
}; };
void uv_register_nmi_notifier(void) void uv_register_nmi_notifier(void)
...@@ -720,8 +756,9 @@ void __init uv_system_init(void) ...@@ -720,8 +756,9 @@ void __init uv_system_init(void)
printk(KERN_DEBUG "UV: Found %d blades\n", uv_num_possible_blades()); printk(KERN_DEBUG "UV: Found %d blades\n", uv_num_possible_blades());
bytes = sizeof(struct uv_blade_info) * uv_num_possible_blades(); bytes = sizeof(struct uv_blade_info) * uv_num_possible_blades();
uv_blade_info = kmalloc(bytes, GFP_KERNEL); uv_blade_info = kzalloc(bytes, GFP_KERNEL);
BUG_ON(!uv_blade_info); BUG_ON(!uv_blade_info);
for (blade = 0; blade < uv_num_possible_blades(); blade++) for (blade = 0; blade < uv_num_possible_blades(); blade++)
uv_blade_info[blade].memory_nid = -1; uv_blade_info[blade].memory_nid = -1;
...@@ -747,6 +784,7 @@ void __init uv_system_init(void) ...@@ -747,6 +784,7 @@ void __init uv_system_init(void)
uv_blade_info[blade].pnode = pnode; uv_blade_info[blade].pnode = pnode;
uv_blade_info[blade].nr_possible_cpus = 0; uv_blade_info[blade].nr_possible_cpus = 0;
uv_blade_info[blade].nr_online_cpus = 0; uv_blade_info[blade].nr_online_cpus = 0;
spin_lock_init(&uv_blade_info[blade].nmi_lock);
max_pnode = max(pnode, max_pnode); max_pnode = max(pnode, max_pnode);
blade++; blade++;
} }
......
...@@ -613,7 +613,7 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c) ...@@ -613,7 +613,7 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
#endif #endif
/* As a rule processors have APIC timer running in deep C states */ /* As a rule processors have APIC timer running in deep C states */
if (c->x86 >= 0xf && !cpu_has_amd_erratum(amd_erratum_400)) if (c->x86 > 0xf && !cpu_has_amd_erratum(amd_erratum_400))
set_cpu_cap(c, X86_FEATURE_ARAT); set_cpu_cap(c, X86_FEATURE_ARAT);
/* /*
...@@ -698,7 +698,7 @@ cpu_dev_register(amd_cpu_dev); ...@@ -698,7 +698,7 @@ cpu_dev_register(amd_cpu_dev);
*/ */
const int amd_erratum_400[] = const int amd_erratum_400[] =
AMD_OSVW_ERRATUM(1, AMD_MODEL_RANGE(0x0f, 0x4, 0x2, 0xff, 0xf), AMD_OSVW_ERRATUM(1, AMD_MODEL_RANGE(0xf, 0x41, 0x2, 0xff, 0xf),
AMD_MODEL_RANGE(0x10, 0x2, 0x1, 0xff, 0xf)); AMD_MODEL_RANGE(0x10, 0x2, 0x1, 0xff, 0xf));
EXPORT_SYMBOL_GPL(amd_erratum_400); EXPORT_SYMBOL_GPL(amd_erratum_400);
......
...@@ -509,6 +509,7 @@ static __cpuinit int allocate_threshold_blocks(unsigned int cpu, ...@@ -509,6 +509,7 @@ static __cpuinit int allocate_threshold_blocks(unsigned int cpu,
out_free: out_free:
if (b) { if (b) {
kobject_put(&b->kobj); kobject_put(&b->kobj);
list_del(&b->miscj);
kfree(b); kfree(b);
} }
return err; return err;
......
...@@ -446,18 +446,20 @@ void intel_init_thermal(struct cpuinfo_x86 *c) ...@@ -446,18 +446,20 @@ void intel_init_thermal(struct cpuinfo_x86 *c)
*/ */
rdmsr(MSR_IA32_MISC_ENABLE, l, h); rdmsr(MSR_IA32_MISC_ENABLE, l, h);
h = lvtthmr_init;
/* /*
* The initial value of thermal LVT entries on all APs always reads * The initial value of thermal LVT entries on all APs always reads
* 0x10000 because APs are woken up by BSP issuing INIT-SIPI-SIPI * 0x10000 because APs are woken up by BSP issuing INIT-SIPI-SIPI
* sequence to them and LVT registers are reset to 0s except for * sequence to them and LVT registers are reset to 0s except for
* the mask bits which are set to 1s when APs receive INIT IPI. * the mask bits which are set to 1s when APs receive INIT IPI.
* Always restore the value that BIOS has programmed on AP based on * If BIOS takes over the thermal interrupt and sets its interrupt
* BSP's info we saved since BIOS is always setting the same value * delivery mode to SMI (not fixed), it restores the value that the
* for all threads/cores * BIOS has programmed on AP based on BSP's info we saved since BIOS
* is always setting the same value for all threads/cores.
*/ */
apic_write(APIC_LVTTHMR, lvtthmr_init); if ((h & APIC_DM_FIXED_MASK) != APIC_DM_FIXED)
apic_write(APIC_LVTTHMR, lvtthmr_init);
h = lvtthmr_init;
if ((l & MSR_IA32_MISC_ENABLE_TM1) && (h & APIC_DM_SMI)) { if ((l & MSR_IA32_MISC_ENABLE_TM1) && (h & APIC_DM_SMI)) {
printk(KERN_DEBUG printk(KERN_DEBUG
......
...@@ -184,26 +184,23 @@ static __initconst const u64 snb_hw_cache_event_ids ...@@ -184,26 +184,23 @@ static __initconst const u64 snb_hw_cache_event_ids
}, },
}, },
[ C(LL ) ] = { [ C(LL ) ] = {
/*
* TBD: Need Off-core Response Performance Monitoring support
*/
[ C(OP_READ) ] = { [ C(OP_READ) ] = {
/* OFFCORE_RESPONSE_0.ANY_DATA.LOCAL_CACHE */ /* OFFCORE_RESPONSE.ANY_DATA.LOCAL_CACHE */
[ C(RESULT_ACCESS) ] = 0x01b7, [ C(RESULT_ACCESS) ] = 0x01b7,
/* OFFCORE_RESPONSE_1.ANY_DATA.ANY_LLC_MISS */ /* OFFCORE_RESPONSE.ANY_DATA.ANY_LLC_MISS */
[ C(RESULT_MISS) ] = 0x01bb, [ C(RESULT_MISS) ] = 0x01b7,
}, },
[ C(OP_WRITE) ] = { [ C(OP_WRITE) ] = {
/* OFFCORE_RESPONSE_0.ANY_RFO.LOCAL_CACHE */ /* OFFCORE_RESPONSE.ANY_RFO.LOCAL_CACHE */
[ C(RESULT_ACCESS) ] = 0x01b7, [ C(RESULT_ACCESS) ] = 0x01b7,
/* OFFCORE_RESPONSE_1.ANY_RFO.ANY_LLC_MISS */ /* OFFCORE_RESPONSE.ANY_RFO.ANY_LLC_MISS */
[ C(RESULT_MISS) ] = 0x01bb, [ C(RESULT_MISS) ] = 0x01b7,
}, },
[ C(OP_PREFETCH) ] = { [ C(OP_PREFETCH) ] = {
/* OFFCORE_RESPONSE_0.PREFETCH.LOCAL_CACHE */ /* OFFCORE_RESPONSE.PREFETCH.LOCAL_CACHE */
[ C(RESULT_ACCESS) ] = 0x01b7, [ C(RESULT_ACCESS) ] = 0x01b7,
/* OFFCORE_RESPONSE_1.PREFETCH.ANY_LLC_MISS */ /* OFFCORE_RESPONSE.PREFETCH.ANY_LLC_MISS */
[ C(RESULT_MISS) ] = 0x01bb, [ C(RESULT_MISS) ] = 0x01b7,
}, },
}, },
[ C(DTLB) ] = { [ C(DTLB) ] = {
...@@ -285,26 +282,26 @@ static __initconst const u64 westmere_hw_cache_event_ids ...@@ -285,26 +282,26 @@ static __initconst const u64 westmere_hw_cache_event_ids
}, },
[ C(LL ) ] = { [ C(LL ) ] = {
[ C(OP_READ) ] = { [ C(OP_READ) ] = {
/* OFFCORE_RESPONSE_0.ANY_DATA.LOCAL_CACHE */ /* OFFCORE_RESPONSE.ANY_DATA.LOCAL_CACHE */
[ C(RESULT_ACCESS) ] = 0x01b7, [ C(RESULT_ACCESS) ] = 0x01b7,
/* OFFCORE_RESPONSE_1.ANY_DATA.ANY_LLC_MISS */ /* OFFCORE_RESPONSE.ANY_DATA.ANY_LLC_MISS */
[ C(RESULT_MISS) ] = 0x01bb, [ C(RESULT_MISS) ] = 0x01b7,
}, },
/* /*
* Use RFO, not WRITEBACK, because a write miss would typically occur * Use RFO, not WRITEBACK, because a write miss would typically occur
* on RFO. * on RFO.
*/ */
[ C(OP_WRITE) ] = { [ C(OP_WRITE) ] = {
/* OFFCORE_RESPONSE_1.ANY_RFO.LOCAL_CACHE */ /* OFFCORE_RESPONSE.ANY_RFO.LOCAL_CACHE */
[ C(RESULT_ACCESS) ] = 0x01bb, [ C(RESULT_ACCESS) ] = 0x01b7,
/* OFFCORE_RESPONSE_0.ANY_RFO.ANY_LLC_MISS */ /* OFFCORE_RESPONSE.ANY_RFO.ANY_LLC_MISS */
[ C(RESULT_MISS) ] = 0x01b7, [ C(RESULT_MISS) ] = 0x01b7,
}, },
[ C(OP_PREFETCH) ] = { [ C(OP_PREFETCH) ] = {
/* OFFCORE_RESPONSE_0.PREFETCH.LOCAL_CACHE */ /* OFFCORE_RESPONSE.PREFETCH.LOCAL_CACHE */
[ C(RESULT_ACCESS) ] = 0x01b7, [ C(RESULT_ACCESS) ] = 0x01b7,
/* OFFCORE_RESPONSE_1.PREFETCH.ANY_LLC_MISS */ /* OFFCORE_RESPONSE.PREFETCH.ANY_LLC_MISS */
[ C(RESULT_MISS) ] = 0x01bb, [ C(RESULT_MISS) ] = 0x01b7,
}, },
}, },
[ C(DTLB) ] = { [ C(DTLB) ] = {
...@@ -352,16 +349,36 @@ static __initconst const u64 westmere_hw_cache_event_ids ...@@ -352,16 +349,36 @@ static __initconst const u64 westmere_hw_cache_event_ids
}; };
/* /*
* OFFCORE_RESPONSE MSR bits (subset), See IA32 SDM Vol 3 30.6.1.3 * Nehalem/Westmere MSR_OFFCORE_RESPONSE bits;
* See IA32 SDM Vol 3B 30.6.1.3
*/ */
#define DMND_DATA_RD (1 << 0) #define NHM_DMND_DATA_RD (1 << 0)
#define DMND_RFO (1 << 1) #define NHM_DMND_RFO (1 << 1)
#define DMND_WB (1 << 3) #define NHM_DMND_IFETCH (1 << 2)
#define PF_DATA_RD (1 << 4) #define NHM_DMND_WB (1 << 3)
#define PF_DATA_RFO (1 << 5) #define NHM_PF_DATA_RD (1 << 4)
#define RESP_UNCORE_HIT (1 << 8) #define NHM_PF_DATA_RFO (1 << 5)
#define RESP_MISS (0xf600) /* non uncore hit */ #define NHM_PF_IFETCH (1 << 6)
#define NHM_OFFCORE_OTHER (1 << 7)
#define NHM_UNCORE_HIT (1 << 8)
#define NHM_OTHER_CORE_HIT_SNP (1 << 9)
#define NHM_OTHER_CORE_HITM (1 << 10)
/* reserved */
#define NHM_REMOTE_CACHE_FWD (1 << 12)
#define NHM_REMOTE_DRAM (1 << 13)
#define NHM_LOCAL_DRAM (1 << 14)
#define NHM_NON_DRAM (1 << 15)
#define NHM_ALL_DRAM (NHM_REMOTE_DRAM|NHM_LOCAL_DRAM)
#define NHM_DMND_READ (NHM_DMND_DATA_RD)
#define NHM_DMND_WRITE (NHM_DMND_RFO|NHM_DMND_WB)
#define NHM_DMND_PREFETCH (NHM_PF_DATA_RD|NHM_PF_DATA_RFO)
#define NHM_L3_HIT (NHM_UNCORE_HIT|NHM_OTHER_CORE_HIT_SNP|NHM_OTHER_CORE_HITM)
#define NHM_L3_MISS (NHM_NON_DRAM|NHM_ALL_DRAM|NHM_REMOTE_CACHE_FWD)
#define NHM_L3_ACCESS (NHM_L3_HIT|NHM_L3_MISS)
static __initconst const u64 nehalem_hw_cache_extra_regs static __initconst const u64 nehalem_hw_cache_extra_regs
[PERF_COUNT_HW_CACHE_MAX] [PERF_COUNT_HW_CACHE_MAX]
...@@ -370,16 +387,16 @@ static __initconst const u64 nehalem_hw_cache_extra_regs ...@@ -370,16 +387,16 @@ static __initconst const u64 nehalem_hw_cache_extra_regs
{ {
[ C(LL ) ] = { [ C(LL ) ] = {
[ C(OP_READ) ] = { [ C(OP_READ) ] = {
[ C(RESULT_ACCESS) ] = DMND_DATA_RD|RESP_UNCORE_HIT, [ C(RESULT_ACCESS) ] = NHM_DMND_READ|NHM_L3_ACCESS,
[ C(RESULT_MISS) ] = DMND_DATA_RD|RESP_MISS, [ C(RESULT_MISS) ] = NHM_DMND_READ|NHM_L3_MISS,
}, },
[ C(OP_WRITE) ] = { [ C(OP_WRITE) ] = {
[ C(RESULT_ACCESS) ] = DMND_RFO|DMND_WB|RESP_UNCORE_HIT, [ C(RESULT_ACCESS) ] = NHM_DMND_WRITE|NHM_L3_ACCESS,
[ C(RESULT_MISS) ] = DMND_RFO|DMND_WB|RESP_MISS, [ C(RESULT_MISS) ] = NHM_DMND_WRITE|NHM_L3_MISS,
}, },
[ C(OP_PREFETCH) ] = { [ C(OP_PREFETCH) ] = {
[ C(RESULT_ACCESS) ] = PF_DATA_RD|PF_DATA_RFO|RESP_UNCORE_HIT, [ C(RESULT_ACCESS) ] = NHM_DMND_PREFETCH|NHM_L3_ACCESS,
[ C(RESULT_MISS) ] = PF_DATA_RD|PF_DATA_RFO|RESP_MISS, [ C(RESULT_MISS) ] = NHM_DMND_PREFETCH|NHM_L3_MISS,
}, },
} }
}; };
......
...@@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op, ...@@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
struct pt_regs *regs) struct pt_regs *regs)
{ {
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
unsigned long flags;
/* This is possible if op is under delayed unoptimizing */ /* This is possible if op is under delayed unoptimizing */
if (kprobe_disabled(&op->kp)) if (kprobe_disabled(&op->kp))
return; return;
preempt_disable(); local_irq_save(flags);
if (kprobe_running()) { if (kprobe_running()) {
kprobes_inc_nmissed_count(&op->kp); kprobes_inc_nmissed_count(&op->kp);
} else { } else {
...@@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op, ...@@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
opt_pre_handler(&op->kp, regs); opt_pre_handler(&op->kp, regs);
__this_cpu_write(current_kprobe, NULL); __this_cpu_write(current_kprobe, NULL);
} }
preempt_enable_no_resched(); local_irq_restore(flags);
} }
static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src) static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
......
...@@ -608,6 +608,9 @@ static int ptrace_write_dr7(struct task_struct *tsk, unsigned long data) ...@@ -608,6 +608,9 @@ static int ptrace_write_dr7(struct task_struct *tsk, unsigned long data)
unsigned len, type; unsigned len, type;
struct perf_event *bp; struct perf_event *bp;
if (ptrace_get_breakpoints(tsk) < 0)
return -ESRCH;
data &= ~DR_CONTROL_RESERVED; data &= ~DR_CONTROL_RESERVED;
old_dr7 = ptrace_get_dr7(thread->ptrace_bps); old_dr7 = ptrace_get_dr7(thread->ptrace_bps);
restore: restore:
...@@ -655,6 +658,9 @@ static int ptrace_write_dr7(struct task_struct *tsk, unsigned long data) ...@@ -655,6 +658,9 @@ static int ptrace_write_dr7(struct task_struct *tsk, unsigned long data)
} }
goto restore; goto restore;
} }
ptrace_put_breakpoints(tsk);
return ((orig_ret < 0) ? orig_ret : rc); return ((orig_ret < 0) ? orig_ret : rc);
} }
...@@ -668,10 +674,17 @@ static unsigned long ptrace_get_debugreg(struct task_struct *tsk, int n) ...@@ -668,10 +674,17 @@ static unsigned long ptrace_get_debugreg(struct task_struct *tsk, int n)
if (n < HBP_NUM) { if (n < HBP_NUM) {
struct perf_event *bp; struct perf_event *bp;
if (ptrace_get_breakpoints(tsk) < 0)
return -ESRCH;
bp = thread->ptrace_bps[n]; bp = thread->ptrace_bps[n];
if (!bp) if (!bp)
return 0; val = 0;
val = bp->hw.info.address; else
val = bp->hw.info.address;
ptrace_put_breakpoints(tsk);
} else if (n == 6) { } else if (n == 6) {
val = thread->debugreg6; val = thread->debugreg6;
} else if (n == 7) { } else if (n == 7) {
...@@ -686,6 +699,10 @@ static int ptrace_set_breakpoint_addr(struct task_struct *tsk, int nr, ...@@ -686,6 +699,10 @@ static int ptrace_set_breakpoint_addr(struct task_struct *tsk, int nr,
struct perf_event *bp; struct perf_event *bp;
struct thread_struct *t = &tsk->thread; struct thread_struct *t = &tsk->thread;
struct perf_event_attr attr; struct perf_event_attr attr;
int err = 0;
if (ptrace_get_breakpoints(tsk) < 0)
return -ESRCH;
if (!t->ptrace_bps[nr]) { if (!t->ptrace_bps[nr]) {
ptrace_breakpoint_init(&attr); ptrace_breakpoint_init(&attr);
...@@ -709,24 +726,23 @@ static int ptrace_set_breakpoint_addr(struct task_struct *tsk, int nr, ...@@ -709,24 +726,23 @@ static int ptrace_set_breakpoint_addr(struct task_struct *tsk, int nr,
* writing for the user. And anyway this is the previous * writing for the user. And anyway this is the previous
* behaviour. * behaviour.
*/ */
if (IS_ERR(bp)) if (IS_ERR(bp)) {
return PTR_ERR(bp); err = PTR_ERR(bp);
goto put;
}
t->ptrace_bps[nr] = bp; t->ptrace_bps[nr] = bp;
} else { } else {
int err;
bp = t->ptrace_bps[nr]; bp = t->ptrace_bps[nr];
attr = bp->attr; attr = bp->attr;
attr.bp_addr = addr; attr.bp_addr = addr;
err = modify_user_hw_breakpoint(bp, &attr); err = modify_user_hw_breakpoint(bp, &attr);
if (err)
return err;
} }
put:
return 0; ptrace_put_breakpoints(tsk);
return err;
} }
/* /*
......
...@@ -61,6 +61,10 @@ struct x86_init_ops x86_init __initdata = { ...@@ -61,6 +61,10 @@ struct x86_init_ops x86_init __initdata = {
.banner = default_banner, .banner = default_banner,
}, },
.mapping = {
.pagetable_reserve = native_pagetable_reserve,
},
.paging = { .paging = {
.pagetable_setup_start = native_pagetable_setup_start, .pagetable_setup_start = native_pagetable_setup_start,
.pagetable_setup_done = native_pagetable_setup_done, .pagetable_setup_done = native_pagetable_setup_done,
......
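The hunk just above, together with the struct x86_init_mapping and the native_pagetable_reserve() declaration added earlier in this commit, introduces a pagetable_reserve hook whose consumer, per the new struct comment, is init_memory_mapping(). A hedged sketch of how platform code could install its own implementation; the my_* names are illustrative and do not appear in this commit:

#include <linux/types.h>
#include <linux/init.h>
#include <asm/pgtable_types.h>		/* native_pagetable_reserve() */
#include <asm/x86_init.h>

static void __init my_pagetable_reserve(u64 start, u64 end)
{
	/*
	 * A platform that consumes only part of [start, end) for kernel
	 * page tables would reserve just that part; this sketch simply
	 * defers to the default behaviour.
	 */
	native_pagetable_reserve(start, end);
}

void __init my_platform_setup(void)
{
	/* Override the default installed in x86_init.c above. */
	x86_init.mapping.pagetable_reserve = my_pagetable_reserve;
}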
(The diffs for the remaining changed files in this commit are collapsed and are not shown here.)