Commit 0b42f25d authored by David S. Miller

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

udplite conflict is resolved by taking what 'net-next' did
which removed the backlog receive method assignment, since
it is no longer necessary.

Two entries were added to the non-priv ethtool operations
switch statement, one in 'net' and one in 'net-next', so
simple overlapping changes.
Signed-off-by: David S. Miller <davem@davemloft.net>
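For reference, the ethtool conflict is the textbook overlapping-additions case: both branches appended an arm to the same switch statement, and the resolution keeps both. A minimal sketch (hypothetical function name and command values, not the actual net/core/ethtool.c code):

	/* illustrative merge result: one case came from 'net', one from 'net-next' */
	static int ethtool_unpriv_allowed(unsigned int cmd)
	{
		switch (cmd) {
		case 0x01:		/* pre-existing entry (common ancestor) */
		case 0x02:		/* entry added in 'net' */
		case 0x03:		/* entry added in 'net-next' */
			return 1;	/* permitted without CAP_NET_ADMIN */
		default:
			return 0;
		}
	}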
@@ -77,6 +77,7 @@ Descriptions of section entries:
 	Q: Patchwork web based patch tracking system site
 	T: SCM tree type and location.
 	   Type is one of: git, hg, quilt, stgit, topgit
+	B: Bug tracking system location.
 	S: Status, one of the following:
 	   Supported: Someone is actually paid to look after this.
 	   Maintained: Someone actually looks after it.
@@ -281,6 +282,7 @@ L: linux-acpi@vger.kernel.org
 W:	https://01.org/linux-acpi
 Q:	https://patchwork.kernel.org/project/linux-acpi/list/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
+B:	https://bugzilla.kernel.org
 S:	Supported
 F:	drivers/acpi/
 F:	drivers/pnp/pnpacpi/
@@ -304,6 +306,8 @@ W: https://acpica.org/
 W:	https://github.com/acpica/acpica/
 Q:	https://patchwork.kernel.org/project/linux-acpi/list/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
+B:	https://bugzilla.kernel.org
+B:	https://bugs.acpica.org
 S:	Supported
 F:	drivers/acpi/acpica/
 F:	include/acpi/
@@ -313,6 +317,7 @@ ACPI FAN DRIVER
 M:	Zhang Rui <rui.zhang@intel.com>
 L:	linux-acpi@vger.kernel.org
 W:	https://01.org/linux-acpi
+B:	https://bugzilla.kernel.org
 S:	Supported
 F:	drivers/acpi/fan.c
@@ -328,6 +333,7 @@ ACPI THERMAL DRIVER
 M:	Zhang Rui <rui.zhang@intel.com>
 L:	linux-acpi@vger.kernel.org
 W:	https://01.org/linux-acpi
+B:	https://bugzilla.kernel.org
 S:	Supported
 F:	drivers/acpi/*thermal*
@@ -335,6 +341,7 @@ ACPI VIDEO DRIVER
 M:	Zhang Rui <rui.zhang@intel.com>
 L:	linux-acpi@vger.kernel.org
 W:	https://01.org/linux-acpi
+B:	https://bugzilla.kernel.org
 S:	Supported
 F:	drivers/acpi/acpi_video.c
@@ -5665,6 +5672,7 @@ HIBERNATION (aka Software Suspend, aka swsusp)
 M:	"Rafael J. Wysocki" <rjw@rjwysocki.net>
 M:	Pavel Machek <pavel@ucw.cz>
 L:	linux-pm@vger.kernel.org
+B:	https://bugzilla.kernel.org
 S:	Supported
 F:	arch/x86/power/
 F:	drivers/base/power/
@@ -9625,6 +9633,7 @@ POWER MANAGEMENT CORE
 M:	"Rafael J. Wysocki" <rjw@rjwysocki.net>
 L:	linux-pm@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
+B:	https://bugzilla.kernel.org
 S:	Supported
 F:	drivers/base/power/
 F:	include/linux/pm.h
@@ -11614,6 +11623,7 @@ M: "Rafael J. Wysocki" <rjw@rjwysocki.net>
 M:	Len Brown <len.brown@intel.com>
 M:	Pavel Machek <pavel@ucw.cz>
 L:	linux-pm@vger.kernel.org
+B:	https://bugzilla.kernel.org
 S:	Supported
 F:	Documentation/power/
 F:	arch/x86/kernel/acpi/
...
@@ -8,7 +8,6 @@ generic-y += early_ioremap.h
 generic-y += emergency-restart.h
 generic-y += errno.h
 generic-y += exec.h
-generic-y += export.h
 generic-y += ioctl.h
 generic-y += ipcbuf.h
 generic-y += irq_regs.h
...
@@ -33,7 +33,7 @@ endif
 obj-$(CONFIG_CPU_IDLE) += cpuidle.o
 obj-$(CONFIG_ISA_DMA_API) += dma.o
 obj-$(CONFIG_FIQ) += fiq.o fiqasm.o
-obj-$(CONFIG_MODULES) += module.o
+obj-$(CONFIG_MODULES) += armksyms.o module.o
 obj-$(CONFIG_ARM_MODULE_PLTS) += module-plts.o
 obj-$(CONFIG_ISA_DMA) += dma-isa.o
 obj-$(CONFIG_PCI) += bios32.o isa.o
...
/*
* linux/arch/arm/kernel/armksyms.c
*
* Copyright (C) 2000 Russell King
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/export.h>
#include <linux/sched.h>
#include <linux/string.h>
#include <linux/cryptohash.h>
#include <linux/delay.h>
#include <linux/in6.h>
#include <linux/syscalls.h>
#include <linux/uaccess.h>
#include <linux/io.h>
#include <linux/arm-smccc.h>
#include <asm/checksum.h>
#include <asm/ftrace.h>
/*
* libgcc functions - functions that are used internally by the
* compiler... (prototypes are not correct though, but that
* doesn't really matter since they're not versioned).
*/
extern void __ashldi3(void);
extern void __ashrdi3(void);
extern void __divsi3(void);
extern void __lshrdi3(void);
extern void __modsi3(void);
extern void __muldi3(void);
extern void __ucmpdi2(void);
extern void __udivsi3(void);
extern void __umodsi3(void);
extern void __do_div64(void);
extern void __bswapsi2(void);
extern void __bswapdi2(void);
extern void __aeabi_idiv(void);
extern void __aeabi_idivmod(void);
extern void __aeabi_lasr(void);
extern void __aeabi_llsl(void);
extern void __aeabi_llsr(void);
extern void __aeabi_lmul(void);
extern void __aeabi_uidiv(void);
extern void __aeabi_uidivmod(void);
extern void __aeabi_ulcmp(void);
extern void fpundefinstr(void);
void mmioset(void *, unsigned int, size_t);
void mmiocpy(void *, const void *, size_t);
/* platform dependent support */
EXPORT_SYMBOL(arm_delay_ops);
/* networking */
EXPORT_SYMBOL(csum_partial);
EXPORT_SYMBOL(csum_partial_copy_from_user);
EXPORT_SYMBOL(csum_partial_copy_nocheck);
EXPORT_SYMBOL(__csum_ipv6_magic);
/* io */
#ifndef __raw_readsb
EXPORT_SYMBOL(__raw_readsb);
#endif
#ifndef __raw_readsw
EXPORT_SYMBOL(__raw_readsw);
#endif
#ifndef __raw_readsl
EXPORT_SYMBOL(__raw_readsl);
#endif
#ifndef __raw_writesb
EXPORT_SYMBOL(__raw_writesb);
#endif
#ifndef __raw_writesw
EXPORT_SYMBOL(__raw_writesw);
#endif
#ifndef __raw_writesl
EXPORT_SYMBOL(__raw_writesl);
#endif
/* string / mem functions */
EXPORT_SYMBOL(strchr);
EXPORT_SYMBOL(strrchr);
EXPORT_SYMBOL(memset);
EXPORT_SYMBOL(memcpy);
EXPORT_SYMBOL(memmove);
EXPORT_SYMBOL(memchr);
EXPORT_SYMBOL(__memzero);
EXPORT_SYMBOL(mmioset);
EXPORT_SYMBOL(mmiocpy);
#ifdef CONFIG_MMU
EXPORT_SYMBOL(copy_page);
EXPORT_SYMBOL(arm_copy_from_user);
EXPORT_SYMBOL(arm_copy_to_user);
EXPORT_SYMBOL(arm_clear_user);
EXPORT_SYMBOL(__get_user_1);
EXPORT_SYMBOL(__get_user_2);
EXPORT_SYMBOL(__get_user_4);
EXPORT_SYMBOL(__get_user_8);
#ifdef __ARMEB__
EXPORT_SYMBOL(__get_user_64t_1);
EXPORT_SYMBOL(__get_user_64t_2);
EXPORT_SYMBOL(__get_user_64t_4);
EXPORT_SYMBOL(__get_user_32t_8);
#endif
EXPORT_SYMBOL(__put_user_1);
EXPORT_SYMBOL(__put_user_2);
EXPORT_SYMBOL(__put_user_4);
EXPORT_SYMBOL(__put_user_8);
#endif
/* gcc lib functions */
EXPORT_SYMBOL(__ashldi3);
EXPORT_SYMBOL(__ashrdi3);
EXPORT_SYMBOL(__divsi3);
EXPORT_SYMBOL(__lshrdi3);
EXPORT_SYMBOL(__modsi3);
EXPORT_SYMBOL(__muldi3);
EXPORT_SYMBOL(__ucmpdi2);
EXPORT_SYMBOL(__udivsi3);
EXPORT_SYMBOL(__umodsi3);
EXPORT_SYMBOL(__do_div64);
EXPORT_SYMBOL(__bswapsi2);
EXPORT_SYMBOL(__bswapdi2);
#ifdef CONFIG_AEABI
EXPORT_SYMBOL(__aeabi_idiv);
EXPORT_SYMBOL(__aeabi_idivmod);
EXPORT_SYMBOL(__aeabi_lasr);
EXPORT_SYMBOL(__aeabi_llsl);
EXPORT_SYMBOL(__aeabi_llsr);
EXPORT_SYMBOL(__aeabi_lmul);
EXPORT_SYMBOL(__aeabi_uidiv);
EXPORT_SYMBOL(__aeabi_uidivmod);
EXPORT_SYMBOL(__aeabi_ulcmp);
#endif
/* bitops */
EXPORT_SYMBOL(_set_bit);
EXPORT_SYMBOL(_test_and_set_bit);
EXPORT_SYMBOL(_clear_bit);
EXPORT_SYMBOL(_test_and_clear_bit);
EXPORT_SYMBOL(_change_bit);
EXPORT_SYMBOL(_test_and_change_bit);
EXPORT_SYMBOL(_find_first_zero_bit_le);
EXPORT_SYMBOL(_find_next_zero_bit_le);
EXPORT_SYMBOL(_find_first_bit_le);
EXPORT_SYMBOL(_find_next_bit_le);
#ifdef __ARMEB__
EXPORT_SYMBOL(_find_first_zero_bit_be);
EXPORT_SYMBOL(_find_next_zero_bit_be);
EXPORT_SYMBOL(_find_first_bit_be);
EXPORT_SYMBOL(_find_next_bit_be);
#endif
#ifdef CONFIG_FUNCTION_TRACER
#ifdef CONFIG_OLD_MCOUNT
EXPORT_SYMBOL(mcount);
#endif
EXPORT_SYMBOL(__gnu_mcount_nc);
#endif
#ifdef CONFIG_ARM_PATCH_PHYS_VIRT
EXPORT_SYMBOL(__pv_phys_pfn_offset);
EXPORT_SYMBOL(__pv_offset);
#endif
#ifdef CONFIG_HAVE_ARM_SMCCC
EXPORT_SYMBOL(arm_smccc_smc);
EXPORT_SYMBOL(arm_smccc_hvc);
#endif
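Collecting the EXPORT_SYMBOL() calls back into this one C file restores the pre-4.9 arrangement: the exports live beside ordinary C declarations instead of inside each assembly source. Any loadable module that references one of these symbols resolves it against this table at load time. A minimal sketch of a consumer (hypothetical out-of-tree module, not part of this merge):

	/* demo.c - hypothetical module; memcpy resolves via EXPORT_SYMBOL(memcpy) above */
	#include <linux/module.h>
	#include <linux/string.h>

	static char buf[16];

	static int __init demo_init(void)
	{
		memcpy(buf, "exported", 9);	/* links against the exported ARM memcpy */
		return 0;
	}

	static void __exit demo_exit(void)
	{
	}

	module_init(demo_init);
	module_exit(demo_exit);
	MODULE_LICENSE("GPL");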
@@ -7,7 +7,6 @@
 #include <asm/assembler.h>
 #include <asm/ftrace.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 #include "entry-header.S"
@@ -154,7 +153,6 @@ ENTRY(mcount)
 	__mcount _old
 #endif
 ENDPROC(mcount)
-EXPORT_SYMBOL(mcount)

 #ifdef CONFIG_DYNAMIC_FTRACE
 ENTRY(ftrace_caller_old)
@@ -207,7 +205,6 @@ UNWIND(.fnstart)
 #endif
 UNWIND(.fnend)
 ENDPROC(__gnu_mcount_nc)
-EXPORT_SYMBOL(__gnu_mcount_nc)

 #ifdef CONFIG_DYNAMIC_FTRACE
 ENTRY(ftrace_caller)
...
@@ -22,7 +22,6 @@
 #include <asm/memory.h>
 #include <asm/thread_info.h>
 #include <asm/pgtable.h>
-#include <asm/export.h>

 #if defined(CONFIG_DEBUG_LL) && !defined(CONFIG_DEBUG_SEMIHOSTING)
 #include CONFIG_DEBUG_LL_INCLUDE
@@ -728,8 +727,6 @@ __pv_phys_pfn_offset:
 __pv_offset:
 	.quad	0
 	.size	__pv_offset, . -__pv_offset
-EXPORT_SYMBOL(__pv_phys_pfn_offset)
-EXPORT_SYMBOL(__pv_offset)
 #endif

 #include "head-common.S"
@@ -16,7 +16,6 @@
 #include <asm/opcodes-sec.h>
 #include <asm/opcodes-virt.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 /*
  * Wrap c macros in asm macros to delay expansion until after the
@@ -52,7 +51,6 @@ UNWIND( .fnend)
 ENTRY(arm_smccc_smc)
 	SMCCC SMCCC_SMC
 ENDPROC(arm_smccc_smc)
-EXPORT_SYMBOL(arm_smccc_smc)

 /*
  * void smccc_hvc(unsigned long a0, unsigned long a1, unsigned long a2,
@@ -62,4 +60,3 @@ EXPORT_SYMBOL(arm_smccc_smc)
 ENTRY(arm_smccc_hvc)
 	SMCCC SMCCC_HVC
 ENDPROC(arm_smccc_hvc)
-EXPORT_SYMBOL(arm_smccc_hvc)
@@ -28,7 +28,6 @@ Boston, MA 02110-1301, USA. */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 #ifdef __ARMEB__
 #define al r1
@@ -53,5 +52,3 @@ ENTRY(__aeabi_llsl)
 ENDPROC(__ashldi3)
 ENDPROC(__aeabi_llsl)
-EXPORT_SYMBOL(__ashldi3)
-EXPORT_SYMBOL(__aeabi_llsl)
@@ -28,7 +28,6 @@ Boston, MA 02110-1301, USA. */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 #ifdef __ARMEB__
 #define al r1
@@ -53,5 +52,3 @@ ENTRY(__aeabi_lasr)
 ENDPROC(__ashrdi3)
 ENDPROC(__aeabi_lasr)
-EXPORT_SYMBOL(__ashrdi3)
-EXPORT_SYMBOL(__aeabi_lasr)
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 #if __LINUX_ARM_ARCH__ >= 6
 	.macro	bitop, name, instr
@@ -26,7 +25,6 @@ UNWIND( .fnstart )
 	bx	lr
 UNWIND( .fnend )
 ENDPROC(\name )
-EXPORT_SYMBOL(\name )
 	.endm

 	.macro	testop, name, instr, store
@@ -57,7 +55,6 @@ UNWIND( .fnstart )
 2:	bx	lr
 UNWIND( .fnend )
 ENDPROC(\name )
-EXPORT_SYMBOL(\name )
 	.endm
 #else
 	.macro	bitop, name, instr
@@ -77,7 +74,6 @@ UNWIND( .fnstart )
 	ret	lr
 UNWIND( .fnend )
 ENDPROC(\name )
-EXPORT_SYMBOL(\name )
 	.endm

 /**
@@ -106,6 +102,5 @@ UNWIND( .fnstart )
 	ret	lr
 UNWIND( .fnend )
 ENDPROC(\name )
-EXPORT_SYMBOL(\name )
 	.endm
 #endif
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 #if __LINUX_ARM_ARCH__ >= 6
 ENTRY(__bswapsi2)
@@ -36,5 +35,3 @@ ENTRY(__bswapdi2)
 	ret	lr
 ENDPROC(__bswapdi2)
 #endif
-EXPORT_SYMBOL(__bswapsi2)
-EXPORT_SYMBOL(__bswapdi2)
@@ -10,7 +10,6 @@
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 	.text
@@ -51,9 +50,6 @@ USER( strnebt r2, [r0])
 UNWIND(.fnend)
 ENDPROC(arm_clear_user)
 ENDPROC(__clear_user_std)
-#ifndef CONFIG_UACCESS_WITH_MEMCPY
-EXPORT_SYMBOL(arm_clear_user)
-#endif

 	.pushsection .text.fixup,"ax"
 	.align	0
...
@@ -13,7 +13,6 @@
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 /*
  * Prototype:
@@ -95,7 +94,6 @@ ENTRY(arm_copy_from_user)

 #include "copy_template.S"

 ENDPROC(arm_copy_from_user)
-EXPORT_SYMBOL(arm_copy_from_user)

 	.pushsection .fixup,"ax"
 	.align 0
...
@@ -13,7 +13,6 @@
 #include <asm/assembler.h>
 #include <asm/asm-offsets.h>
 #include <asm/cache.h>
-#include <asm/export.h>

 #define COPY_COUNT (PAGE_SZ / (2 * L1_CACHE_BYTES) PLD( -1 ))
@@ -46,4 +45,3 @@ ENTRY(copy_page)
 	PLD(	beq	2b	)
 	ldmfd	sp!, {r4, pc}	@ 3
 ENDPROC(copy_page)
-EXPORT_SYMBOL(copy_page)
@@ -13,7 +13,6 @@
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 /*
  * Prototype:
@@ -100,9 +99,6 @@ WEAK(arm_copy_to_user)
 ENDPROC(arm_copy_to_user)
 ENDPROC(__copy_to_user_std)
-#ifndef CONFIG_UACCESS_WITH_MEMCPY
-EXPORT_SYMBOL(arm_copy_to_user)
-#endif

 	.pushsection .text.fixup,"ax"
 	.align 0
...
@@ -9,7 +9,6 @@
  */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 	.text
@@ -31,4 +30,4 @@ ENTRY(__csum_ipv6_magic)
 	adcs	r0, r0, #0
 	ldmfd	sp!, {pc}
 ENDPROC(__csum_ipv6_magic)
-EXPORT_SYMBOL(__csum_ipv6_magic)
@@ -9,7 +9,6 @@
  */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 	.text
@@ -141,4 +140,3 @@ ENTRY(csum_partial)
 	bne	4b
 	b	.Lless4
 ENDPROC(csum_partial)
-EXPORT_SYMBOL(csum_partial)
@@ -49,6 +49,5 @@
 #define FN_ENTRY	ENTRY(csum_partial_copy_nocheck)
 #define FN_EXIT		ENDPROC(csum_partial_copy_nocheck)
-#define FN_EXPORT	EXPORT_SYMBOL(csum_partial_copy_nocheck)

 #include "csumpartialcopygeneric.S"
@@ -8,7 +8,6 @@
  * published by the Free Software Foundation.
  */
 #include <asm/assembler.h>
-#include <asm/export.h>

 /*
  * unsigned int
@@ -332,4 +331,3 @@ FN_ENTRY
 	mov	r5, r4, get_byte_1
 	b	.Lexit
 FN_EXIT
-FN_EXPORT
@@ -73,7 +73,6 @@
 #define FN_ENTRY	ENTRY(csum_partial_copy_from_user)
 #define FN_EXIT		ENDPROC(csum_partial_copy_from_user)
-#define FN_EXPORT	EXPORT_SYMBOL(csum_partial_copy_from_user)

 #include "csumpartialcopygeneric.S"
...
@@ -24,7 +24,6 @@
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/export.h>
 #include <linux/timex.h>

 /*
@@ -35,7 +34,6 @@ struct arm_delay_ops arm_delay_ops __ro_after_init = {
 	.const_udelay	= __loop_const_udelay,
 	.udelay		= __loop_udelay,
 };
-EXPORT_SYMBOL(arm_delay_ops);

 static const struct delay_timer *delay_timer;
 static bool delay_calibrated;
...
@@ -15,7 +15,6 @@
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 #ifdef __ARMEB__
 #define xh r0
@@ -211,4 +210,3 @@ Ldiv0_64:
 UNWIND(.fnend)
 ENDPROC(__do_div64)
-EXPORT_SYMBOL(__do_div64)
@@ -15,7 +15,6 @@
  */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>
 	.text

 /*
@@ -38,7 +37,6 @@ ENTRY(_find_first_zero_bit_le)
 3:	mov	r0, r1		@ no free bits
 	ret	lr
 ENDPROC(_find_first_zero_bit_le)
-EXPORT_SYMBOL(_find_first_zero_bit_le)

 /*
  * Purpose  : Find next 'zero' bit
@@ -59,7 +57,6 @@ ENTRY(_find_next_zero_bit_le)
 	add	r2, r2, #1	@ align bit pointer
 	b	2b		@ loop for next bit
 ENDPROC(_find_next_zero_bit_le)
-EXPORT_SYMBOL(_find_next_zero_bit_le)

 /*
  * Purpose  : Find a 'one' bit
@@ -81,7 +78,6 @@ ENTRY(_find_first_bit_le)
 3:	mov	r0, r1		@ no free bits
 	ret	lr
 ENDPROC(_find_first_bit_le)
-EXPORT_SYMBOL(_find_first_bit_le)

 /*
  * Purpose  : Find next 'one' bit
@@ -101,7 +97,6 @@ ENTRY(_find_next_bit_le)
 	add	r2, r2, #1	@ align bit pointer
 	b	2b		@ loop for next bit
 ENDPROC(_find_next_bit_le)
-EXPORT_SYMBOL(_find_next_bit_le)

 #ifdef __ARMEB__
@@ -121,7 +116,6 @@ ENTRY(_find_first_zero_bit_be)
 3:	mov	r0, r1		@ no free bits
 	ret	lr
 ENDPROC(_find_first_zero_bit_be)
-EXPORT_SYMBOL(_find_first_zero_bit_be)

 ENTRY(_find_next_zero_bit_be)
 	teq	r1, #0
@@ -139,7 +133,6 @@ ENTRY(_find_next_zero_bit_be)
 	add	r2, r2, #1	@ align bit pointer
 	b	2b		@ loop for next bit
 ENDPROC(_find_next_zero_bit_be)
-EXPORT_SYMBOL(_find_next_zero_bit_be)

 ENTRY(_find_first_bit_be)
 	teq	r1, #0
@@ -157,7 +150,6 @@ ENTRY(_find_first_bit_be)
 3:	mov	r0, r1		@ no free bits
 	ret	lr
 ENDPROC(_find_first_bit_be)
-EXPORT_SYMBOL(_find_first_bit_be)

 ENTRY(_find_next_bit_be)
 	teq	r1, #0
@@ -174,7 +166,6 @@ ENTRY(_find_next_bit_be)
 	add	r2, r2, #1	@ align bit pointer
 	b	2b		@ loop for next bit
 ENDPROC(_find_next_bit_be)
-EXPORT_SYMBOL(_find_next_bit_be)
 #endif
...
@@ -31,7 +31,6 @@
 #include <asm/assembler.h>
 #include <asm/errno.h>
 #include <asm/domain.h>
-#include <asm/export.h>

 ENTRY(__get_user_1)
 	check_uaccess r0, 1, r1, r2, __get_user_bad
@@ -39,7 +38,6 @@ ENTRY(__get_user_1)
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_1)
-EXPORT_SYMBOL(__get_user_1)

 ENTRY(__get_user_2)
 	check_uaccess r0, 2, r1, r2, __get_user_bad
@@ -60,7 +58,6 @@ rb .req r0
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_2)
-EXPORT_SYMBOL(__get_user_2)

 ENTRY(__get_user_4)
 	check_uaccess r0, 4, r1, r2, __get_user_bad
@@ -68,7 +65,6 @@ ENTRY(__get_user_4)
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_4)
-EXPORT_SYMBOL(__get_user_4)

 ENTRY(__get_user_8)
 	check_uaccess r0, 8, r1, r2, __get_user_bad
@@ -82,7 +78,6 @@ ENTRY(__get_user_8)
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_8)
-EXPORT_SYMBOL(__get_user_8)

 #ifdef __ARMEB__
 ENTRY(__get_user_32t_8)
@@ -96,7 +91,6 @@ ENTRY(__get_user_32t_8)
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_32t_8)
-EXPORT_SYMBOL(__get_user_32t_8)

 ENTRY(__get_user_64t_1)
 	check_uaccess r0, 1, r1, r2, __get_user_bad8
@@ -104,7 +98,6 @@ ENTRY(__get_user_64t_1)
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_64t_1)
-EXPORT_SYMBOL(__get_user_64t_1)

 ENTRY(__get_user_64t_2)
 	check_uaccess r0, 2, r1, r2, __get_user_bad8
@@ -121,7 +114,6 @@ rb .req r0
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_64t_2)
-EXPORT_SYMBOL(__get_user_64t_2)

 ENTRY(__get_user_64t_4)
 	check_uaccess r0, 4, r1, r2, __get_user_bad8
@@ -129,7 +121,6 @@ ENTRY(__get_user_64t_4)
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_64t_4)
-EXPORT_SYMBOL(__get_user_64t_4)
 #endif

 __get_user_bad8:
...
@@ -9,7 +9,6 @@
  */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 .Linsb_align:	rsb	ip, ip, #4
 	cmp	ip, r2
@@ -122,4 +121,3 @@ ENTRY(__raw_readsb)
 	ldmfd	sp!, {r4 - r6, pc}
 ENDPROC(__raw_readsb)
-EXPORT_SYMBOL(__raw_readsb)
@@ -9,7 +9,6 @@
  */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 ENTRY(__raw_readsl)
 	teq	r2, #0		@ do we have to check for the zero len?
@@ -78,4 +77,3 @@ ENTRY(__raw_readsl)
 	strb	r3, [r1, #0]
 	ret	lr
 ENDPROC(__raw_readsl)
-EXPORT_SYMBOL(__raw_readsl)
@@ -9,7 +9,6 @@
  */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 .Linsw_bad_alignment:
 	adr	r0, .Linsw_bad_align_msg
@@ -104,4 +103,4 @@ ENTRY(__raw_readsw)
 	ldmfd	sp!, {r4, r5, r6, pc}
-EXPORT_SYMBOL(__raw_readsw)
@@ -9,7 +9,6 @@
  */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 	.macro	pack, rd, hw1, hw2
 #ifndef __ARMEB__
@@ -130,4 +129,3 @@ ENTRY(__raw_readsw)
 	strneb	ip, [r1]
 	ldmfd	sp!, {r4, pc}
 ENDPROC(__raw_readsw)
-EXPORT_SYMBOL(__raw_readsw)
@@ -9,7 +9,6 @@
  */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 	.macro	outword, rd
 #ifndef __ARMEB__
@@ -93,4 +92,3 @@ ENTRY(__raw_writesb)
 	ldmfd	sp!, {r4, r5, pc}
 ENDPROC(__raw_writesb)
-EXPORT_SYMBOL(__raw_writesb)
@@ -9,7 +9,6 @@
  */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 ENTRY(__raw_writesl)
 	teq	r2, #0		@ do we have to check for the zero len?
@@ -66,4 +65,3 @@ ENTRY(__raw_writesl)
 	bne	6b
 	ret	lr
 ENDPROC(__raw_writesl)
-EXPORT_SYMBOL(__raw_writesl)
@@ -9,7 +9,6 @@
  */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 .Loutsw_bad_alignment:
 	adr	r0, .Loutsw_bad_align_msg
@@ -125,4 +124,3 @@ ENTRY(__raw_writesw)
 	strne	ip, [r0]
 	ldmfd	sp!, {r4, r5, r6, pc}
-EXPORT_SYMBOL(__raw_writesw)
@@ -9,7 +9,6 @@
  */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 	.macro	outword, rd
 #ifndef __ARMEB__
@@ -99,4 +98,3 @@ ENTRY(__raw_writesw)
 	strneh	ip, [r0]
 	ret	lr
 ENDPROC(__raw_writesw)
-EXPORT_SYMBOL(__raw_writesw)
@@ -36,7 +36,6 @@ Boston, MA 02111-1307, USA. */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 .macro ARM_DIV_BODY dividend, divisor, result, curbit
@@ -239,8 +238,6 @@ UNWIND(.fnstart)
 UNWIND(.fnend)
 ENDPROC(__udivsi3)
 ENDPROC(__aeabi_uidiv)
-EXPORT_SYMBOL(__udivsi3)
-EXPORT_SYMBOL(__aeabi_uidiv)

 ENTRY(__umodsi3)
 UNWIND(.fnstart)
@@ -259,7 +256,6 @@ UNWIND(.fnstart)
 UNWIND(.fnend)
 ENDPROC(__umodsi3)
-EXPORT_SYMBOL(__umodsi3)

 #ifdef CONFIG_ARM_PATCH_IDIV
 	.align	3
@@ -307,8 +303,6 @@ UNWIND(.fnstart)
 UNWIND(.fnend)
 ENDPROC(__divsi3)
 ENDPROC(__aeabi_idiv)
-EXPORT_SYMBOL(__divsi3)
-EXPORT_SYMBOL(__aeabi_idiv)

 ENTRY(__modsi3)
 UNWIND(.fnstart)
@@ -333,7 +327,6 @@ UNWIND(.fnstart)
 UNWIND(.fnend)
 ENDPROC(__modsi3)
-EXPORT_SYMBOL(__modsi3)

 #ifdef CONFIG_AEABI
@@ -350,7 +343,6 @@ UNWIND(.save {r0, r1, ip, lr} )
 UNWIND(.fnend)
 ENDPROC(__aeabi_uidivmod)
-EXPORT_SYMBOL(__aeabi_uidivmod)

 ENTRY(__aeabi_idivmod)
 UNWIND(.fnstart)
@@ -364,7 +356,6 @@ UNWIND(.save {r0, r1, ip, lr} )
 UNWIND(.fnend)
 ENDPROC(__aeabi_idivmod)
-EXPORT_SYMBOL(__aeabi_idivmod)
 #endif
...
@@ -28,7 +28,6 @@ Boston, MA 02110-1301, USA. */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 #ifdef __ARMEB__
 #define al r1
@@ -53,5 +52,3 @@ ENTRY(__aeabi_llsr)
 ENDPROC(__lshrdi3)
 ENDPROC(__aeabi_llsr)
-EXPORT_SYMBOL(__lshrdi3)
-EXPORT_SYMBOL(__aeabi_llsr)
@@ -11,7 +11,6 @@
  */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 	.text
 	.align	5
@@ -25,4 +24,3 @@ ENTRY(memchr)
 2:	movne	r0, #0
 	ret	lr
 ENDPROC(memchr)
-EXPORT_SYMBOL(memchr)
@@ -13,7 +13,6 @@
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 #define LDR1W_SHIFT	0
 #define STR1W_SHIFT	0
@@ -69,5 +68,3 @@ ENTRY(memcpy)
 ENDPROC(memcpy)
 ENDPROC(mmiocpy)
-EXPORT_SYMBOL(memcpy)
-EXPORT_SYMBOL(mmiocpy)
@@ -13,7 +13,6 @@
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 	.text
@@ -226,4 +225,3 @@ ENTRY(memmove)
 18:	backward_copy_shift	push=24	pull=8
 ENDPROC(memmove)
-EXPORT_SYMBOL(memmove)
@@ -12,7 +12,6 @@
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 	.text
 	.align	5
@@ -136,5 +135,3 @@ UNWIND( .fnstart )
 UNWIND( .fnend )
 ENDPROC(memset)
 ENDPROC(mmioset)
-EXPORT_SYMBOL(memset)
-EXPORT_SYMBOL(mmioset)
@@ -10,7 +10,6 @@
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/unwind.h>
-#include <asm/export.h>

 	.text
 	.align	5
@@ -136,4 +135,3 @@ UNWIND( .fnstart )
 	ret	lr			@ 1
 UNWIND( .fnend )
 ENDPROC(__memzero)
-EXPORT_SYMBOL(__memzero)
@@ -12,7 +12,6 @@
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 #ifdef __ARMEB__
 #define xh r0
@@ -47,5 +46,3 @@ ENTRY(__aeabi_lmul)
 ENDPROC(__muldi3)
 ENDPROC(__aeabi_lmul)
-EXPORT_SYMBOL(__muldi3)
-EXPORT_SYMBOL(__aeabi_lmul)
@@ -31,7 +31,6 @@
 #include <asm/assembler.h>
 #include <asm/errno.h>
 #include <asm/domain.h>
-#include <asm/export.h>

 ENTRY(__put_user_1)
 	check_uaccess r0, 1, r1, ip, __put_user_bad
@@ -39,7 +38,6 @@ ENTRY(__put_user_1)
 	mov	r0, #0
 	ret	lr
 ENDPROC(__put_user_1)
-EXPORT_SYMBOL(__put_user_1)

 ENTRY(__put_user_2)
 	check_uaccess r0, 2, r1, ip, __put_user_bad
@@ -64,7 +62,6 @@ ENTRY(__put_user_2)
 	mov	r0, #0
 	ret	lr
 ENDPROC(__put_user_2)
-EXPORT_SYMBOL(__put_user_2)

 ENTRY(__put_user_4)
 	check_uaccess r0, 4, r1, ip, __put_user_bad
@@ -72,7 +69,6 @@ ENTRY(__put_user_4)
 	mov	r0, #0
 	ret	lr
 ENDPROC(__put_user_4)
-EXPORT_SYMBOL(__put_user_4)

 ENTRY(__put_user_8)
 	check_uaccess r0, 8, r1, ip, __put_user_bad
@@ -86,7 +82,6 @@ ENTRY(__put_user_8)
 	mov	r0, #0
 	ret	lr
 ENDPROC(__put_user_8)
-EXPORT_SYMBOL(__put_user_8)

 __put_user_bad:
 	mov	r0, #-EFAULT
...
@@ -11,7 +11,6 @@
  */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 	.text
 	.align	5
@@ -26,4 +25,3 @@ ENTRY(strchr)
 	subeq	r0, r0, #1
 	ret	lr
 ENDPROC(strchr)
-EXPORT_SYMBOL(strchr)
@@ -11,7 +11,6 @@
  */
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 	.text
 	.align	5
@@ -25,4 +24,3 @@ ENTRY(strrchr)
 	mov	r0, r3
 	ret	lr
 ENDPROC(strrchr)
-EXPORT_SYMBOL(strrchr)
@@ -19,7 +19,6 @@
 #include <linux/gfp.h>
 #include <linux/highmem.h>
 #include <linux/hugetlb.h>
-#include <linux/export.h>
 #include <asm/current.h>
 #include <asm/page.h>
@@ -157,7 +156,6 @@ arm_copy_to_user(void __user *to, const void *from, unsigned long n)
 	}
 	return n;
 }
-EXPORT_SYMBOL(arm_copy_to_user);

 static unsigned long noinline
 __clear_user_memset(void __user *addr, unsigned long n)
@@ -215,7 +213,6 @@ unsigned long arm_clear_user(void __user *addr, unsigned long n)
 	}
 	return n;
 }
-EXPORT_SYMBOL(arm_clear_user);

 #if 0
...
@@ -12,7 +12,6 @@
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 #ifdef __ARMEB__
 #define xh r0
@@ -36,7 +35,6 @@ ENTRY(__ucmpdi2)
 	ret	lr
 ENDPROC(__ucmpdi2)
-EXPORT_SYMBOL(__ucmpdi2)

 #ifdef CONFIG_AEABI
@@ -50,7 +48,6 @@ ENTRY(__aeabi_ulcmp)
 	ret	lr
 ENDPROC(__aeabi_ulcmp)
-EXPORT_SYMBOL(__aeabi_ulcmp)
 #endif
@@ -32,6 +32,7 @@ endif
 ifdef CONFIG_SND_IMX_SOC
 obj-y += ssi-fiq.o
+obj-y += ssi-fiq-ksym.o
 endif

 # i.MX21 based machines
...
/*
* Exported ksyms for the SSI FIQ handler
*
* Copyright (C) 2009, Sascha Hauer <s.hauer@pengutronix.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/module.h>
#include <linux/platform_data/asoc-imx-ssi.h>
EXPORT_SYMBOL(imx_ssi_fiq_tx_buffer);
EXPORT_SYMBOL(imx_ssi_fiq_rx_buffer);
EXPORT_SYMBOL(imx_ssi_fiq_start);
EXPORT_SYMBOL(imx_ssi_fiq_end);
EXPORT_SYMBOL(imx_ssi_fiq_base);
@@ -8,7 +8,6 @@
 #include <linux/linkage.h>
 #include <asm/assembler.h>
-#include <asm/export.h>

 /*
  * r8 = bit 0-15: tx offset, bit 16-31: tx buffer size
@@ -145,8 +144,4 @@ imx_ssi_fiq_tx_buffer:
 	.word 0x0
 .L_imx_ssi_fiq_end:
 imx_ssi_fiq_end:
-EXPORT_SYMBOL(imx_ssi_fiq_tx_buffer)
-EXPORT_SYMBOL(imx_ssi_fiq_rx_buffer)
-EXPORT_SYMBOL(imx_ssi_fiq_start)
-EXPORT_SYMBOL(imx_ssi_fiq_end)
-EXPORT_SYMBOL(imx_ssi_fiq_base)
@@ -34,7 +34,9 @@ config PARISC
 	select HAVE_ARCH_HASH
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
-	select HAVE_UNSTABLE_SCHED_CLOCK if (SMP || !64BIT)
+	select GENERIC_SCHED_CLOCK
+	select HAVE_UNSTABLE_SCHED_CLOCK if SMP
+	select GENERIC_CLOCKEVENTS
 	select ARCH_NO_COHERENT_DMA_MMAP
 	select CPU_NO_EFFICIENT_FFS
...
@@ -369,6 +369,7 @@ void __init parisc_setup_cache_timing(void)
 {
 	unsigned long rangetime, alltime;
 	unsigned long size, start;
+	unsigned long threshold;

 	alltime = mfctl(16);
 	flush_data_cache();
@@ -382,17 +383,12 @@ void __init parisc_setup_cache_timing(void)
 	printk(KERN_DEBUG "Whole cache flush %lu cycles, flushing %lu bytes %lu cycles\n",
 		alltime, size, rangetime);

-	/* Racy, but if we see an intermediate value, it's ok too... */
-	parisc_cache_flush_threshold = size * alltime / rangetime;
-	parisc_cache_flush_threshold = L1_CACHE_ALIGN(parisc_cache_flush_threshold);
-	if (!parisc_cache_flush_threshold)
-		parisc_cache_flush_threshold = FLUSH_THRESHOLD;
-	if (parisc_cache_flush_threshold > cache_info.dc_size)
-		parisc_cache_flush_threshold = cache_info.dc_size;
-	printk(KERN_INFO "Setting cache flush threshold to %lu kB\n",
+	threshold = L1_CACHE_ALIGN(size * alltime / rangetime);
+	if (threshold > cache_info.dc_size)
+		threshold = cache_info.dc_size;
+	if (threshold)
+		parisc_cache_flush_threshold = threshold;
+	printk(KERN_INFO "Cache flush threshold set to %lu KiB\n",
 		parisc_cache_flush_threshold/1024);

 	/* calculate TLB flush threshold */
@@ -401,7 +397,7 @@ void __init parisc_setup_cache_timing(void)
 	flush_tlb_all();
 	alltime = mfctl(16) - alltime;

-	size = PAGE_SIZE;
+	size = 0;
 	start = (unsigned long) _text;
 	rangetime = mfctl(16);
 	while (start < (unsigned long) _end) {
@@ -414,13 +410,10 @@ void __init parisc_setup_cache_timing(void)
 	printk(KERN_DEBUG "Whole TLB flush %lu cycles, flushing %lu bytes %lu cycles\n",
 		alltime, size, rangetime);

-	parisc_tlb_flush_threshold = size * alltime / rangetime;
-	parisc_tlb_flush_threshold *= num_online_cpus();
-	parisc_tlb_flush_threshold = PAGE_ALIGN(parisc_tlb_flush_threshold);
-	if (!parisc_tlb_flush_threshold)
-		parisc_tlb_flush_threshold = FLUSH_TLB_THRESHOLD;
-	printk(KERN_INFO "Setting TLB flush threshold to %lu kB\n",
+	threshold = PAGE_ALIGN(num_online_cpus() * size * alltime / rangetime);
+	if (threshold)
+		parisc_tlb_flush_threshold = threshold;
+	printk(KERN_INFO "TLB flush threshold set to %lu KiB\n",
 		parisc_tlb_flush_threshold/1024);
 }
...
@@ -58,7 +58,7 @@ void __init setup_pdc(void)
 	status = pdc_system_map_find_mods(&module_result, &module_path, 0);
 	if (status == PDC_OK) {
 		pdc_type = PDC_TYPE_SYSTEM_MAP;
-		printk("System Map.\n");
+		pr_cont("System Map.\n");
 		return;
 	}
@@ -77,7 +77,7 @@ void __init setup_pdc(void)
 	status = pdc_pat_cell_get_number(&cell_info);
 	if (status == PDC_OK) {
 		pdc_type = PDC_TYPE_PAT;
-		printk("64 bit PAT.\n");
+		pr_cont("64 bit PAT.\n");
 		return;
 	}
 #endif
@@ -97,12 +97,12 @@ void __init setup_pdc(void)
 	case 0xC:		/* 715/64, at least */
 		pdc_type = PDC_TYPE_SNAKE;
-		printk("Snake.\n");
+		pr_cont("Snake.\n");
 		return;

 	default:		/* Everything else */
-		printk("Unsupported.\n");
+		pr_cont("Unsupported.\n");
 		panic("If this is a 64-bit machine, please try a 64-bit kernel.\n");
 	}
 }
...
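The printk()-to-pr_cont() conversion matters because setup_pdc() prints the firmware type as the continuation of a console line opened earlier; printk treats each call without KERN_CONT as a fresh line, so the trailing fragments must be marked as continuations explicitly. The pattern, sketched (the opening string is illustrative, not quoted from the file):

	pr_info("Determining PDC firmware type: ");	/* opens the console line */
	/* ... probe the firmware ... */
	pr_cont("System Map.\n");			/* appends to that same line */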
@@ -96,7 +96,7 @@ fitmanyloop:					/* Loop if LOOP >= 2 */
 fitmanymiddle:					/* Loop if LOOP >= 2 */
 	addib,COND(>)	-1, %r31, fitmanymiddle	/* Adjusted inner loop decr */
-	pitlbe		0(%sr1, %r28)
+	pitlbe		%r0(%sr1, %r28)
 	pitlbe,m	%arg1(%sr1, %r28)	/* Last pitlbe and addr adjust */
 	addib,COND(>)	-1, %r29, fitmanymiddle	/* Middle loop decr */
 	copy		%arg3, %r31		/* Re-init inner loop count */
@@ -139,7 +139,7 @@ fdtmanyloop:					/* Loop if LOOP >= 2 */
 fdtmanymiddle:					/* Loop if LOOP >= 2 */
 	addib,COND(>)	-1, %r31, fdtmanymiddle	/* Adjusted inner loop decr */
-	pdtlbe		0(%sr1, %r28)
+	pdtlbe		%r0(%sr1, %r28)
 	pdtlbe,m	%arg1(%sr1, %r28)	/* Last pdtlbe and addr adjust */
 	addib,COND(>)	-1, %r29, fdtmanymiddle	/* Middle loop decr */
 	copy		%arg3, %r31		/* Re-init inner loop count */
@@ -626,12 +626,12 @@ ENTRY_CFI(copy_user_page_asm)
 	/* Purge any old translations */

 #ifdef CONFIG_PA20
-	pdtlb,l		0(%r28)
-	pdtlb,l		0(%r29)
+	pdtlb,l		%r0(%r28)
+	pdtlb,l		%r0(%r29)
 #else
 	tlb_lock	%r20,%r21,%r22
-	pdtlb		0(%r28)
-	pdtlb		0(%r29)
+	pdtlb		%r0(%r28)
+	pdtlb		%r0(%r29)
 	tlb_unlock	%r20,%r21,%r22
 #endif
@@ -774,10 +774,10 @@ ENTRY_CFI(clear_user_page_asm)
 	/* Purge any old translation */

 #ifdef CONFIG_PA20
-	pdtlb,l		0(%r28)
+	pdtlb,l		%r0(%r28)
 #else
 	tlb_lock	%r20,%r21,%r22
-	pdtlb		0(%r28)
+	pdtlb		%r0(%r28)
 	tlb_unlock	%r20,%r21,%r22
 #endif
@@ -858,10 +858,10 @@ ENTRY_CFI(flush_dcache_page_asm)
 	/* Purge any old translation */

 #ifdef CONFIG_PA20
-	pdtlb,l		0(%r28)
+	pdtlb,l		%r0(%r28)
 #else
 	tlb_lock	%r20,%r21,%r22
-	pdtlb		0(%r28)
+	pdtlb		%r0(%r28)
 	tlb_unlock	%r20,%r21,%r22
 #endif
@@ -898,10 +898,10 @@ ENTRY_CFI(flush_dcache_page_asm)
 	sync

 #ifdef CONFIG_PA20
-	pdtlb,l		0(%r25)
+	pdtlb,l		%r0(%r25)
 #else
 	tlb_lock	%r20,%r21,%r22
-	pdtlb		0(%r25)
+	pdtlb		%r0(%r25)
 	tlb_unlock	%r20,%r21,%r22
 #endif
@@ -931,13 +931,18 @@ ENTRY_CFI(flush_icache_page_asm)
 	depwi		0, 31,PAGE_SHIFT, %r28	/* Clear any offset bits */
 #endif

-	/* Purge any old translation */
+	/* Purge any old translation.  Note that the FIC instruction
+	 * may use either the instruction or data TLB.  Given that we
+	 * have a flat address space, it's not clear which TLB will be
+	 * used.  So, we purge both entries.  */

 #ifdef CONFIG_PA20
+	pdtlb,l		%r0(%r28)
 	pitlb,l		%r0(%sr4,%r28)
 #else
 	tlb_lock	%r20,%r21,%r22
-	pitlb		(%sr4,%r28)
+	pdtlb		%r0(%r28)
+	pitlb		%r0(%sr4,%r28)
 	tlb_unlock	%r20,%r21,%r22
 #endif
@@ -976,10 +981,12 @@ ENTRY_CFI(flush_icache_page_asm)
 	sync

 #ifdef CONFIG_PA20
+	pdtlb,l		%r0(%r28)
 	pitlb,l		%r0(%sr4,%r25)
 #else
 	tlb_lock	%r20,%r21,%r22
-	pitlb		(%sr4,%r25)
+	pdtlb		%r0(%r28)
+	pitlb		%r0(%sr4,%r25)
 	tlb_unlock	%r20,%r21,%r22
 #endif
...
@@ -95,8 +95,8 @@ static inline int map_pte_uncached(pte_t * pte,
 		if (!pte_none(*pte))
 			printk(KERN_ERR "map_pte_uncached: page already exists\n");
-		set_pte(pte, __mk_pte(*paddr_ptr, PAGE_KERNEL_UNC));
 		purge_tlb_start(flags);
+		set_pte(pte, __mk_pte(*paddr_ptr, PAGE_KERNEL_UNC));
 		pdtlb_kernel(orig_vaddr);
 		purge_tlb_end(flags);
 		vaddr += PAGE_SIZE;
...
@@ -334,6 +334,10 @@ static int __init parisc_init(void)
 	/* tell PDC we're Linux. Nevermind failure. */
 	pdc_stable_write(0x40, &osid, sizeof(osid));

+	/* start with known state */
+	flush_cache_all_local();
+	flush_tlb_all_local(NULL);
+
 	processor_init();
 #ifdef CONFIG_SMP
 	pr_info("CPU(s): %d out of %d %s at %d.%06d MHz online\n",
...
@@ -14,6 +14,7 @@
 #include <linux/module.h>
 #include <linux/rtc.h>
 #include <linux/sched.h>
+#include <linux/sched_clock.h>
 #include <linux/kernel.h>
 #include <linux/param.h>
 #include <linux/string.h>
@@ -39,18 +40,6 @@
 static unsigned long clocktick __read_mostly;	/* timer cycles per tick */

-#ifndef CONFIG_64BIT
-/*
- * The processor-internal cycle counter (Control Register 16) is used as time
- * source for the sched_clock() function.  This register is 64bit wide on a
- * 64-bit kernel and 32bit on a 32-bit kernel.  Since sched_clock() always
- * requires a 64bit counter we emulate on the 32-bit kernel the higher 32bits
- * with a per-cpu variable which we increase every time the counter
- * wraps-around (which happens every ~4 secounds).
- */
-static DEFINE_PER_CPU(unsigned long, cr16_high_32_bits);
-#endif

 /*
  * We keep time on PA-RISC Linux by using the Interval Timer which is
  * a pair of registers; one is read-only and one is write-only; both
@@ -121,12 +110,6 @@ irqreturn_t __irq_entry timer_interrupt(int irq, void *dev_id)
 	 */
 	mtctl(next_tick, 16);

-#if !defined(CONFIG_64BIT)
-	/* check for overflow on a 32bit kernel (every ~4 seconds). */
-	if (unlikely(next_tick < now))
-		this_cpu_inc(cr16_high_32_bits);
-#endif

 	/* Skip one clocktick on purpose if we missed next_tick.
 	 * The new CR16 must be "later" than current CR16 otherwise
 	 * itimer would not fire until CR16 wrapped - e.g 4 seconds
@@ -208,7 +191,7 @@ EXPORT_SYMBOL(profile_pc);

 /* clock source code */

-static cycle_t read_cr16(struct clocksource *cs)
+static cycle_t notrace read_cr16(struct clocksource *cs)
 {
 	return get_cycles();
 }
@@ -287,26 +270,9 @@ void read_persistent_clock(struct timespec *ts)
 	}
 }

-/*
- * sched_clock() framework
- */
-
-static u32 cyc2ns_mul __read_mostly;
-static u32 cyc2ns_shift __read_mostly;
-
-u64 sched_clock(void)
+static u64 notrace read_cr16_sched_clock(void)
 {
-	u64 now;
-
-	/* Get current cycle counter (Control Register 16). */
-#ifdef CONFIG_64BIT
-	now = mfctl(16);
-#else
-	now = mfctl(16) + (((u64) this_cpu_read(cr16_high_32_bits)) << 32);
-#endif
-
-	/* return the value in ns (cycles_2_ns) */
-	return mul_u64_u32_shr(now, cyc2ns_mul, cyc2ns_shift);
+	return get_cycles();
 }
@@ -316,17 +282,16 @@ u64 sched_clock(void)

 void __init time_init(void)
 {
-	unsigned long current_cr16_khz;
+	unsigned long cr16_hz;

-	current_cr16_khz = PAGE0->mem_10msec/10;	/* kHz */
 	clocktick = (100 * PAGE0->mem_10msec) / HZ;
-
-	/* calculate mult/shift values for cr16 */
-	clocks_calc_mult_shift(&cyc2ns_mul, &cyc2ns_shift, current_cr16_khz,
-				NSEC_PER_MSEC, 0);
-
 	start_cpu_itimer();	/* get CPU 0 started */

+	cr16_hz = 100 * PAGE0->mem_10msec;	/* Hz */
+
 	/* register at clocksource framework */
-	clocksource_register_khz(&clocksource_cr16, current_cr16_khz);
+	clocksource_register_hz(&clocksource_cr16, cr16_hz);
+
+	/* register as sched_clock source */
+	sched_clock_register(read_cr16_sched_clock, BITS_PER_LONG, cr16_hz);
 }
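The registration at the end follows the generic sched_clock pattern: hand the core a notrace read callback, the counter width, and its rate, and it performs the cycles-to-nanoseconds scaling that the deleted cyc2ns_mul/cyc2ns_shift code did by hand (and handles narrow-counter wrap, which is why the per-cpu cr16_high_32_bits emulation could go). A sketch of the same pattern for a hypothetical free-running counter:

	#include <linux/sched_clock.h>

	/* sketch: back sched_clock() with an arbitrary monotonic counter */
	static u64 notrace my_counter_read(void)
	{
		return read_hw_counter();	/* hypothetical 64-bit counter read */
	}

	static void __init my_time_init(void)
	{
		/* width in bits and rate in Hz of the underlying counter */
		sched_clock_register(my_counter_read, 64, MY_COUNTER_HZ);
	}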
@@ -232,8 +232,12 @@ void start(void)
 		console_ops.close();

 	kentry = (kernel_entry_t) vmlinux.addr;
-	if (ft_addr)
-		kentry(ft_addr, 0, NULL);
+	if (ft_addr) {
+		if (platform_ops.kentry)
+			platform_ops.kentry(ft_addr, vmlinux.addr);
+		else
+			kentry(ft_addr, 0, NULL);
+	}
 	else
 		kentry((unsigned long)initrd.addr, initrd.size,
 		       loader_info.promptr);
...
@@ -12,6 +12,19 @@

 	.text

+	.globl	opal_kentry
+opal_kentry:
+	/* r3 is the fdt ptr */
+	mtctr	r4
+	li	r4, 0
+	li	r5, 0
+	li	r6, 0
+	li	r7, 0
+	ld	r11,opal@got(r2)
+	ld	r8,0(r11)
+	ld	r9,8(r11)
+	bctr
+
 #define OPAL_CALL(name, token)			\
 	.globl name;				\
 name:						\
...
...@@ -23,14 +23,25 @@ struct opal { ...@@ -23,14 +23,25 @@ struct opal {
static u32 opal_con_id; static u32 opal_con_id;
/* see opal-wrappers.S */
int64_t opal_console_write(int64_t term_number, u64 *length, const u8 *buffer); int64_t opal_console_write(int64_t term_number, u64 *length, const u8 *buffer);
int64_t opal_console_read(int64_t term_number, uint64_t *length, u8 *buffer); int64_t opal_console_read(int64_t term_number, uint64_t *length, u8 *buffer);
int64_t opal_console_write_buffer_space(uint64_t term_number, uint64_t *length); int64_t opal_console_write_buffer_space(uint64_t term_number, uint64_t *length);
int64_t opal_console_flush(uint64_t term_number); int64_t opal_console_flush(uint64_t term_number);
int64_t opal_poll_events(uint64_t *outstanding_event_mask); int64_t opal_poll_events(uint64_t *outstanding_event_mask);
void opal_kentry(unsigned long fdt_addr, void *vmlinux_addr);
static int opal_con_open(void) static int opal_con_open(void)
{ {
/*
* When OPAL loads the boot kernel it stashes the OPAL base and entry
* address in r8 and r9 so the kernel can use the OPAL console
* before unflattening the devicetree. While executing, the wrapper will
* probably trash r8 and r9, so this kentry hook restores them before
* entering the decompressed kernel.
*/
platform_ops.kentry = opal_kentry;
return 0; return 0;
} }
......
...@@ -30,6 +30,7 @@ struct platform_ops { ...@@ -30,6 +30,7 @@ struct platform_ops {
void * (*realloc)(void *ptr, unsigned long size); void * (*realloc)(void *ptr, unsigned long size);
void (*exit)(void); void (*exit)(void);
void * (*vmlinux_alloc)(unsigned long size); void * (*vmlinux_alloc)(unsigned long size);
void (*kentry)(unsigned long fdt_addr, void *vmlinux_addr);
}; };
extern struct platform_ops platform_ops; extern struct platform_ops platform_ops;
......
...@@ -14,6 +14,10 @@ ...@@ -14,6 +14,10 @@
#include <linux/threads.h> #include <linux/threads.h>
#include <linux/kprobes.h> #include <linux/kprobes.h>
#include <asm/cacheflush.h>
#include <asm/checksum.h>
#include <asm/uaccess.h>
#include <asm/epapr_hcalls.h>
#include <uapi/asm/ucontext.h> #include <uapi/asm/ucontext.h>
...@@ -109,4 +113,12 @@ void early_setup_secondary(void); ...@@ -109,4 +113,12 @@ void early_setup_secondary(void);
/* time */ /* time */
void accumulate_stolen_time(void); void accumulate_stolen_time(void);
/* misc runtime */
extern u64 __bswapdi2(u64);
extern s64 __lshrdi3(s64, int);
extern s64 __ashldi3(s64, int);
extern s64 __ashrdi3(s64, int);
extern int __cmpdi2(s64, s64);
extern int __ucmpdi2(u64, u64);
#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */ #endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
...@@ -28,6 +28,12 @@ ...@@ -28,6 +28,12 @@
* Individual features below. * Individual features below.
*/ */
/*
* Kernel read only support.
* We added the ppp value 0b110 in ISA 2.04.
*/
#define MMU_FTR_KERNEL_RO ASM_CONST(0x00004000)
/* /*
* We need to clear top 16bits of va (from the remaining 64 bits )in * We need to clear top 16bits of va (from the remaining 64 bits )in
* tlbie* instructions * tlbie* instructions
...@@ -103,10 +109,10 @@ ...@@ -103,10 +109,10 @@
#define MMU_FTRS_POWER4 MMU_FTRS_DEFAULT_HPTE_ARCH_V2 #define MMU_FTRS_POWER4 MMU_FTRS_DEFAULT_HPTE_ARCH_V2
#define MMU_FTRS_PPC970 MMU_FTRS_POWER4 | MMU_FTR_TLBIE_CROP_VA #define MMU_FTRS_PPC970 MMU_FTRS_POWER4 | MMU_FTR_TLBIE_CROP_VA
#define MMU_FTRS_POWER5 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE #define MMU_FTRS_POWER5 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE
#define MMU_FTRS_POWER6 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE #define MMU_FTRS_POWER6 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_KERNEL_RO
#define MMU_FTRS_POWER7 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE #define MMU_FTRS_POWER7 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_KERNEL_RO
#define MMU_FTRS_POWER8 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE #define MMU_FTRS_POWER8 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_KERNEL_RO
#define MMU_FTRS_POWER9 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE #define MMU_FTRS_POWER9 MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_KERNEL_RO
#define MMU_FTRS_CELL MMU_FTRS_DEFAULT_HPTE_ARCH_V2 | \ #define MMU_FTRS_CELL MMU_FTRS_DEFAULT_HPTE_ARCH_V2 | \
MMU_FTR_CI_LARGE_PAGE MMU_FTR_CI_LARGE_PAGE
#define MMU_FTRS_PA6T MMU_FTRS_DEFAULT_HPTE_ARCH_V2 | \ #define MMU_FTRS_PA6T MMU_FTRS_DEFAULT_HPTE_ARCH_V2 | \
......
...@@ -355,6 +355,7 @@ ...@@ -355,6 +355,7 @@
#define LPCR_PECE0 ASM_CONST(0x0000000000004000) /* ext. exceptions can cause exit */ #define LPCR_PECE0 ASM_CONST(0x0000000000004000) /* ext. exceptions can cause exit */
#define LPCR_PECE1 ASM_CONST(0x0000000000002000) /* decrementer can cause exit */ #define LPCR_PECE1 ASM_CONST(0x0000000000002000) /* decrementer can cause exit */
#define LPCR_PECE2 ASM_CONST(0x0000000000001000) /* machine check etc can cause exit */ #define LPCR_PECE2 ASM_CONST(0x0000000000001000) /* machine check etc can cause exit */
#define LPCR_PECE_HVEE ASM_CONST(0x0000400000000000) /* P9 Wakeup on HV interrupts */
#define LPCR_MER ASM_CONST(0x0000000000000800) /* Mediated External Exception */ #define LPCR_MER ASM_CONST(0x0000000000000800) /* Mediated External Exception */
#define LPCR_MER_SH 11 #define LPCR_MER_SH 11
#define LPCR_TC ASM_CONST(0x0000000000000200) /* Translation control */ #define LPCR_TC ASM_CONST(0x0000000000000200) /* Translation control */
......
...@@ -98,8 +98,8 @@ _GLOBAL(__setup_cpu_power9) ...@@ -98,8 +98,8 @@ _GLOBAL(__setup_cpu_power9)
li r0,0 li r0,0
mtspr SPRN_LPID,r0 mtspr SPRN_LPID,r0
mfspr r3,SPRN_LPCR mfspr r3,SPRN_LPCR
ori r3, r3, LPCR_PECEDH LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE)
ori r3, r3, LPCR_HVICE or r3, r3, r4
bl __init_LPCR bl __init_LPCR
bl __init_HFSCR bl __init_HFSCR
bl __init_tlb_power9 bl __init_tlb_power9
...@@ -118,8 +118,8 @@ _GLOBAL(__restore_cpu_power9) ...@@ -118,8 +118,8 @@ _GLOBAL(__restore_cpu_power9)
li r0,0 li r0,0
mtspr SPRN_LPID,r0 mtspr SPRN_LPID,r0
mfspr r3,SPRN_LPCR mfspr r3,SPRN_LPCR
ori r3, r3, LPCR_PECEDH LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE)
ori r3, r3, LPCR_HVICE or r3, r3, r4
bl __init_LPCR bl __init_LPCR
bl __init_HFSCR bl __init_HFSCR
bl __init_tlb_power9 bl __init_tlb_power9
......
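For readers who do not live in ppc asm, the reason the two ori instructions had to become LOAD_REG_IMMEDIATE plus or: ori encodes an unsigned 16-bit immediate and can therefore only set bits 0-15 of a register, while LPCR_PECE_HVEE is 0x0000400000000000 (bit 46, per the reg.h hunk above). The full mask is materialised in the scratch register r4 first and applied with a single or. A sketch of the encoding constraint in plain C (the small constant is illustrative):

#include <stdint.h>
#include <stdio.h>

/* ori can only OR an unsigned 16-bit immediate into a register */
static int fits_ori(uint64_t mask)
{
	return (mask & ~0xffffULL) == 0;
}

int main(void)
{
	printf("%d\n", fits_ori(0x0000000000008000ULL)); /* 1: ori can encode */
	printf("%d\n", fits_ori(0x0000400000000000ULL)); /* 0: needs LOAD_REG_IMMEDIATE */
	return 0;
}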
...@@ -193,8 +193,12 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags) ...@@ -193,8 +193,12 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags)
/* /*
* Kernel read only mapped with ppp bits 0b110 * Kernel read only mapped with ppp bits 0b110
*/ */
if (!(pteflags & _PAGE_WRITE)) if (!(pteflags & _PAGE_WRITE)) {
rflags |= (HPTE_R_PP0 | 0x2); if (mmu_has_feature(MMU_FTR_KERNEL_RO))
rflags |= (HPTE_R_PP0 | 0x2);
else
rflags |= 0x3;
}
} else { } else {
if (pteflags & _PAGE_RWX) if (pteflags & _PAGE_RWX)
rflags |= 0x2; rflags |= 0x2;
......
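Summarising the fallback this hash_utils hunk adds: CPUs that advertise MMU_FTR_KERNEL_RO (ISA 2.04 and later, per the mmu.h hunk) map kernel read-only pages with the three-bit encoding ppp = 0b110, i.e. HPTE_R_PP0 plus PP = 0b10; older CPUs lack that encoding and fall back to the closest legacy value, PP = 0b11. A sketch using the macros from the diff (kernel context assumed, restating the branch above rather than defining new policy):

static unsigned long kernel_ro_pp_bits(void)
{
	if (mmu_has_feature(MMU_FTR_KERNEL_RO))
		return HPTE_R_PP0 | 0x2;	/* ppp = 0b110 */
	return 0x3;				/* pp  = 0b11  */
}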
...@@ -218,8 +218,8 @@ void do_timer_interrupt(struct pt_regs *regs, int fault_num) ...@@ -218,8 +218,8 @@ void do_timer_interrupt(struct pt_regs *regs, int fault_num)
*/ */
unsigned long long sched_clock(void) unsigned long long sched_clock(void)
{ {
return clocksource_cyc2ns(get_cycles(), return mult_frac(get_cycles(),
sched_clock_mult, SCHED_CLOCK_SHIFT); sched_clock_mult, 1ULL << SCHED_CLOCK_SHIFT);
} }
int setup_profiling_timer(unsigned int multiplier) int setup_profiling_timer(unsigned int multiplier)
......
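The tile change is about overflow, not style: clocksource_cyc2ns() computes (cyc * mult) >> shift, and that 64-bit product wraps once the cycle counter grows large, whereas mult_frac() splits the operand around the divisor so no intermediate exceeds 64 bits. The kernel macro expands essentially as in this standalone sketch:

#include <stdint.h>

/* mult_frac(x, n, d) == x * n / d without a 128-bit intermediate:
 * scale the d-quotient and d-remainder of x separately. */
static uint64_t mult_frac64(uint64_t x, uint64_t n, uint64_t d)
{
	uint64_t quot = x / d;
	uint64_t rem  = x % d;

	return quot * n + rem * n / d;
}

Here d is 1ULL << SCHED_CLOCK_SHIFT, so the compiler reduces the division to a shift and the remainder to a mask.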
...@@ -40,8 +40,8 @@ GCOV_PROFILE := n ...@@ -40,8 +40,8 @@ GCOV_PROFILE := n
UBSAN_SANITIZE :=n UBSAN_SANITIZE :=n
LDFLAGS := -m elf_$(UTS_MACHINE) LDFLAGS := -m elf_$(UTS_MACHINE)
ifeq ($(CONFIG_RELOCATABLE),y) # Compressed kernel should be built as PIE since it may be loaded at any
# If kernel is relocatable, build compressed kernel as PIE. # address by the bootloader.
ifeq ($(CONFIG_X86_32),y) ifeq ($(CONFIG_X86_32),y)
LDFLAGS += $(call ld-option, -pie) $(call ld-option, --no-dynamic-linker) LDFLAGS += $(call ld-option, -pie) $(call ld-option, --no-dynamic-linker)
else else
...@@ -51,7 +51,6 @@ else ...@@ -51,7 +51,6 @@ else
LDFLAGS += $(shell $(LD) --help 2>&1 | grep -q "\-z noreloc-overflow" \ LDFLAGS += $(shell $(LD) --help 2>&1 | grep -q "\-z noreloc-overflow" \
&& echo "-z noreloc-overflow -pie --no-dynamic-linker") && echo "-z noreloc-overflow -pie --no-dynamic-linker")
endif endif
endif
LDFLAGS_vmlinux := -T LDFLAGS_vmlinux := -T
hostprogs-y := mkpiggy hostprogs-y := mkpiggy
......
...@@ -87,6 +87,12 @@ int validate_cpu(void) ...@@ -87,6 +87,12 @@ int validate_cpu(void)
return -1; return -1;
} }
if (CONFIG_X86_MINIMUM_CPU_FAMILY <= 4 && !IS_ENABLED(CONFIG_M486) &&
!has_eflag(X86_EFLAGS_ID)) {
printf("This kernel requires a CPU with the CPUID instruction. Build with CONFIG_M486=y to run on this CPU.\n");
return -1;
}
if (err_flags) { if (err_flags) {
puts("This kernel requires the following features " puts("This kernel requires the following features "
"not present on the CPU:\n"); "not present on the CPU:\n");
......
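Background on has_eflag(X86_EFLAGS_ID): the architectural way to ask a 32-bit CPU whether it implements CPUID is to try to flip bit 21 (ID) of EFLAGS; on a 486 without CPUID the bit snaps back. A userspace restatement of the probe, assuming GCC inline asm on i386 (not the kernel's exact code):

/* Build with gcc -m32. Returns nonzero when CPUID is available. */
static int cpu_has_cpuid(void)
{
	unsigned long before, after;

	asm volatile("pushfl\n\t"                /* keep a saved copy  */
		     "pushfl\n\t"
		     "popl %0\n\t"               /* before = EFLAGS    */
		     "movl %0, %1\n\t"
		     "xorl $0x00200000, %1\n\t"  /* toggle the ID bit  */
		     "pushl %1\n\t"
		     "popfl\n\t"
		     "pushfl\n\t"
		     "popl %1\n\t"               /* after = EFLAGS     */
		     "popfl"                     /* restore saved copy */
		     : "=&r" (before), "=&r" (after)
		     :
		     : "cc");

	return (before ^ after) & 0x00200000;
}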
...@@ -662,7 +662,13 @@ static int __init amd_core_pmu_init(void) ...@@ -662,7 +662,13 @@ static int __init amd_core_pmu_init(void)
pr_cont("Fam15h "); pr_cont("Fam15h ");
x86_pmu.get_event_constraints = amd_get_event_constraints_f15h; x86_pmu.get_event_constraints = amd_get_event_constraints_f15h;
break; break;
case 0x17:
pr_cont("Fam17h ");
/*
* In family 17h, there are no event constraints in the PMC hardware.
* We fall back to using the default amd_get_event_constraints.
*/
break;
default: default:
pr_err("core perfctr but no constraints; unknown hardware!\n"); pr_err("core perfctr but no constraints; unknown hardware!\n");
return -ENODEV; return -ENODEV;
......
...@@ -2352,7 +2352,7 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent ...@@ -2352,7 +2352,7 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent
frame.next_frame = 0; frame.next_frame = 0;
frame.return_address = 0; frame.return_address = 0;
if (!access_ok(VERIFY_READ, fp, 8)) if (!valid_user_frame(fp, sizeof(frame)))
break; break;
bytes = __copy_from_user_nmi(&frame.next_frame, fp, 4); bytes = __copy_from_user_nmi(&frame.next_frame, fp, 4);
...@@ -2362,9 +2362,6 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent ...@@ -2362,9 +2362,6 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent
if (bytes != 0) if (bytes != 0)
break; break;
if (!valid_user_frame(fp, sizeof(frame)))
break;
perf_callchain_store(entry, cs_base + frame.return_address); perf_callchain_store(entry, cs_base + frame.return_address);
fp = compat_ptr(ss_base + frame.next_frame); fp = compat_ptr(ss_base + frame.next_frame);
} }
...@@ -2413,7 +2410,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs ...@@ -2413,7 +2410,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
frame.next_frame = NULL; frame.next_frame = NULL;
frame.return_address = 0; frame.return_address = 0;
if (!access_ok(VERIFY_READ, fp, sizeof(*fp) * 2)) if (!valid_user_frame(fp, sizeof(frame)))
break; break;
bytes = __copy_from_user_nmi(&frame.next_frame, fp, sizeof(*fp)); bytes = __copy_from_user_nmi(&frame.next_frame, fp, sizeof(*fp));
...@@ -2423,9 +2420,6 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs ...@@ -2423,9 +2420,6 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
if (bytes != 0) if (bytes != 0)
break; break;
if (!valid_user_frame(fp, sizeof(frame)))
break;
perf_callchain_store(entry, frame.return_address); perf_callchain_store(entry, frame.return_address);
fp = (void __user *)frame.next_frame; fp = (void __user *)frame.next_frame;
} }
......
...@@ -1108,20 +1108,20 @@ static void setup_pebs_sample_data(struct perf_event *event, ...@@ -1108,20 +1108,20 @@ static void setup_pebs_sample_data(struct perf_event *event,
} }
/* /*
* We use the interrupt regs as a base because the PEBS record * We use the interrupt regs as a base because the PEBS record does not
* does not contain a full regs set, specifically it seems to * contain a full regs set, specifically it seems to lack segment
* lack segment descriptors, which get used by things like * descriptors, which get used by things like user_mode().
* user_mode().
* *
* In the simple case fix up only the IP and BP,SP regs, for * In the simple case fix up only the IP for PERF_SAMPLE_IP.
* PERF_SAMPLE_IP and PERF_SAMPLE_CALLCHAIN to function properly. *
* A possible PERF_SAMPLE_REGS will have to transfer all regs. * We must however always use BP,SP from iregs for the unwinder to stay
* sane; the record BP,SP can point into thin air when the record is
* from a previous PMI context or an (I)RET happened between the record
* and PMI.
*/ */
*regs = *iregs; *regs = *iregs;
regs->flags = pebs->flags; regs->flags = pebs->flags;
set_linear_ip(regs, pebs->ip); set_linear_ip(regs, pebs->ip);
regs->bp = pebs->bp;
regs->sp = pebs->sp;
if (sample_type & PERF_SAMPLE_REGS_INTR) { if (sample_type & PERF_SAMPLE_REGS_INTR) {
regs->ax = pebs->ax; regs->ax = pebs->ax;
...@@ -1130,10 +1130,21 @@ static void setup_pebs_sample_data(struct perf_event *event, ...@@ -1130,10 +1130,21 @@ static void setup_pebs_sample_data(struct perf_event *event,
regs->dx = pebs->dx; regs->dx = pebs->dx;
regs->si = pebs->si; regs->si = pebs->si;
regs->di = pebs->di; regs->di = pebs->di;
regs->bp = pebs->bp;
regs->sp = pebs->sp;
regs->flags = pebs->flags; /*
* Per the above, only set BP,SP if we don't need callchains.
*
* XXX: does this make sense?
*/
if (!(sample_type & PERF_SAMPLE_CALLCHAIN)) {
regs->bp = pebs->bp;
regs->sp = pebs->sp;
}
/*
* Preserve PERF_EFLAGS_VM from set_linear_ip().
*/
regs->flags = pebs->flags | (regs->flags & PERF_EFLAGS_VM);
#ifndef CONFIG_X86_32 #ifndef CONFIG_X86_32
regs->r8 = pebs->r8; regs->r8 = pebs->r8;
regs->r9 = pebs->r9; regs->r9 = pebs->r9;
......
...@@ -319,9 +319,9 @@ static struct intel_uncore_box *uncore_alloc_box(struct intel_uncore_type *type, ...@@ -319,9 +319,9 @@ static struct intel_uncore_box *uncore_alloc_box(struct intel_uncore_type *type,
*/ */
static int uncore_pmu_event_init(struct perf_event *event); static int uncore_pmu_event_init(struct perf_event *event);
static bool is_uncore_event(struct perf_event *event) static bool is_box_event(struct intel_uncore_box *box, struct perf_event *event)
{ {
return event->pmu->event_init == uncore_pmu_event_init; return &box->pmu->pmu == event->pmu;
} }
static int static int
...@@ -340,7 +340,7 @@ uncore_collect_events(struct intel_uncore_box *box, struct perf_event *leader, ...@@ -340,7 +340,7 @@ uncore_collect_events(struct intel_uncore_box *box, struct perf_event *leader,
n = box->n_events; n = box->n_events;
if (is_uncore_event(leader)) { if (is_box_event(box, leader)) {
box->event_list[n] = leader; box->event_list[n] = leader;
n++; n++;
} }
...@@ -349,7 +349,7 @@ uncore_collect_events(struct intel_uncore_box *box, struct perf_event *leader, ...@@ -349,7 +349,7 @@ uncore_collect_events(struct intel_uncore_box *box, struct perf_event *leader,
return n; return n;
list_for_each_entry(event, &leader->sibling_list, group_entry) { list_for_each_entry(event, &leader->sibling_list, group_entry) {
if (!is_uncore_event(event) || if (!is_box_event(box, event) ||
event->state <= PERF_EVENT_STATE_OFF) event->state <= PERF_EVENT_STATE_OFF)
continue; continue;
......
...@@ -490,24 +490,12 @@ static int snb_uncore_imc_event_add(struct perf_event *event, int flags) ...@@ -490,24 +490,12 @@ static int snb_uncore_imc_event_add(struct perf_event *event, int flags)
snb_uncore_imc_event_start(event, 0); snb_uncore_imc_event_start(event, 0);
box->n_events++;
return 0; return 0;
} }
static void snb_uncore_imc_event_del(struct perf_event *event, int flags) static void snb_uncore_imc_event_del(struct perf_event *event, int flags)
{ {
struct intel_uncore_box *box = uncore_event_to_box(event);
int i;
snb_uncore_imc_event_stop(event, PERF_EF_UPDATE); snb_uncore_imc_event_stop(event, PERF_EF_UPDATE);
for (i = 0; i < box->n_events; i++) {
if (event == box->event_list[i]) {
--box->n_events;
break;
}
}
} }
int snb_pci2phy_map_init(int devid) int snb_pci2phy_map_init(int devid)
......
...@@ -113,7 +113,7 @@ struct debug_store { ...@@ -113,7 +113,7 @@ struct debug_store {
* Per register state. * Per register state.
*/ */
struct er_account { struct er_account {
raw_spinlock_t lock; /* per-core: protect structure */ raw_spinlock_t lock; /* per-core: protect structure */
u64 config; /* extra MSR config */ u64 config; /* extra MSR config */
u64 reg; /* extra MSR number */ u64 reg; /* extra MSR number */
atomic_t ref; /* reference count */ atomic_t ref; /* reference count */
......
...@@ -112,7 +112,7 @@ void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs, ...@@ -112,7 +112,7 @@ void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
for (; stack < stack_info.end; stack++) { for (; stack < stack_info.end; stack++) {
unsigned long real_addr; unsigned long real_addr;
int reliable = 0; int reliable = 0;
unsigned long addr = *stack; unsigned long addr = READ_ONCE_NOCHECK(*stack);
unsigned long *ret_addr_p = unsigned long *ret_addr_p =
unwind_get_return_address_ptr(&state); unwind_get_return_address_ptr(&state);
......
...@@ -521,14 +521,14 @@ void fpu__clear(struct fpu *fpu) ...@@ -521,14 +521,14 @@ void fpu__clear(struct fpu *fpu)
{ {
WARN_ON_FPU(fpu != &current->thread.fpu); /* Almost certainly an anomaly */ WARN_ON_FPU(fpu != &current->thread.fpu); /* Almost certainly an anomaly */
if (!use_eager_fpu() || !static_cpu_has(X86_FEATURE_FPU)) { fpu__drop(fpu);
/* FPU state will be reallocated lazily at the first use. */
fpu__drop(fpu); /*
} else { * Make sure fpstate is cleared and initialized.
if (!fpu->fpstate_active) { */
fpu__activate_curr(fpu); if (static_cpu_has(X86_FEATURE_FPU)) {
user_fpu_begin(); fpu__activate_curr(fpu);
} user_fpu_begin();
copy_init_fpstate_to_fpregs(); copy_init_fpstate_to_fpregs();
} }
} }
......
...@@ -665,14 +665,17 @@ __PAGE_ALIGNED_BSS ...@@ -665,14 +665,17 @@ __PAGE_ALIGNED_BSS
initial_pg_pmd: initial_pg_pmd:
.fill 1024*KPMDS,4,0 .fill 1024*KPMDS,4,0
#else #else
ENTRY(initial_page_table) .globl initial_page_table
initial_page_table:
.fill 1024,4,0 .fill 1024,4,0
#endif #endif
initial_pg_fixmap: initial_pg_fixmap:
.fill 1024,4,0 .fill 1024,4,0
ENTRY(empty_zero_page) .globl empty_zero_page
empty_zero_page:
.fill 4096,1,0 .fill 4096,1,0
ENTRY(swapper_pg_dir) .globl swapper_pg_dir
swapper_pg_dir:
.fill 1024,4,0 .fill 1024,4,0
EXPORT_SYMBOL(empty_zero_page) EXPORT_SYMBOL(empty_zero_page)
......
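On the ENTRY() to .globl conversions in head_32.S: ENTRY() is a code annotation and inserts alignment padding, which is wrong for data that must sit exactly where the surrounding .fill directives put it. Approximate expansions for comparison (the x86 alignment value is config-dependent and shown here as an assumption):

/*
 *   ENTRY(name)  expands to roughly:  .globl name; .p2align 4, 0x90; name:
 *   the fix emits just:               .globl name; name:
 *
 * The .p2align pads with 0x90 (NOP) bytes, so page-table symbols that
 * are expected to directly follow the previous block could be shifted
 * or gain stray padding.
 */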
...@@ -66,13 +66,36 @@ __init int create_simplefb(const struct screen_info *si, ...@@ -66,13 +66,36 @@ __init int create_simplefb(const struct screen_info *si,
{ {
struct platform_device *pd; struct platform_device *pd;
struct resource res; struct resource res;
unsigned long len; u64 base, size;
u32 length;
/* don't use lfb_size as it may contain the whole VMEM instead of only /*
* the part that is occupied by the framebuffer */ * If the 64BIT_BASE capability is set, ext_lfb_base will contain the
len = mode->height * mode->stride; * upper half of the base address. Assemble the address, then make sure
len = PAGE_ALIGN(len); * it is valid and we can actually access it.
if (len > (u64)si->lfb_size << 16) { */
base = si->lfb_base;
if (si->capabilities & VIDEO_CAPABILITY_64BIT_BASE)
base |= (u64)si->ext_lfb_base << 32;
if (!base || (u64)(resource_size_t)base != base) {
printk(KERN_DEBUG "sysfb: inaccessible VRAM base\n");
return -EINVAL;
}
/*
* Don't use lfb_size as IORESOURCE size, since it may contain the
* entire VMEM, and thus require huge mappings. Use just the part we
* need, that is, the part where the framebuffer is located. But verify
* that it does not exceed the advertised VMEM.
* Note that in case of VBE, the lfb_size is shifted by 16 bits for
* historical reasons.
*/
size = si->lfb_size;
if (si->orig_video_isVGA == VIDEO_TYPE_VLFB)
size <<= 16;
length = mode->height * mode->stride;
length = PAGE_ALIGN(length);
if (length > size) {
printk(KERN_WARNING "sysfb: VRAM smaller than advertised\n"); printk(KERN_WARNING "sysfb: VRAM smaller than advertised\n");
return -EINVAL; return -EINVAL;
} }
...@@ -81,8 +104,8 @@ __init int create_simplefb(const struct screen_info *si, ...@@ -81,8 +104,8 @@ __init int create_simplefb(const struct screen_info *si,
memset(&res, 0, sizeof(res)); memset(&res, 0, sizeof(res));
res.flags = IORESOURCE_MEM | IORESOURCE_BUSY; res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
res.name = simplefb_resname; res.name = simplefb_resname;
res.start = si->lfb_base; res.start = base;
res.end = si->lfb_base + len - 1; res.end = res.start + length - 1;
if (res.end <= res.start) if (res.end <= res.start)
return -EINVAL; return -EINVAL;
......
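The odd-looking double cast in the simplefb hunk is a truncation test: resource_size_t tracks phys_addr_t, so on a 32-bit kernel without PAE it is 32 bits wide, and a framebuffer base above 4 GiB would be silently chopped when stored in a struct resource. Round-tripping through the cast catches exactly that. A standalone restatement (the 32-bit typedef is the assumption):

#include <stdint.h>
#include <stdio.h>

typedef uint32_t resource_size_t;	/* assumed: 32-bit, non-PAE */

static int base_fits(uint64_t base)
{
	return (uint64_t)(resource_size_t)base == base;
}

int main(void)
{
	printf("%d\n", base_fits(0x00000000e0000000ULL));  /* 1 */
	printf("%d\n", base_fits(0x00000001e0000000ULL));  /* 0: truncates */
	return 0;
}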
...@@ -7,11 +7,13 @@ ...@@ -7,11 +7,13 @@
unsigned long unwind_get_return_address(struct unwind_state *state) unsigned long unwind_get_return_address(struct unwind_state *state)
{ {
unsigned long addr = READ_ONCE_NOCHECK(*state->sp);
if (unwind_done(state)) if (unwind_done(state))
return 0; return 0;
return ftrace_graph_ret_addr(state->task, &state->graph_idx, return ftrace_graph_ret_addr(state->task, &state->graph_idx,
*state->sp, state->sp); addr, state->sp);
} }
EXPORT_SYMBOL_GPL(unwind_get_return_address); EXPORT_SYMBOL_GPL(unwind_get_return_address);
...@@ -23,8 +25,10 @@ bool unwind_next_frame(struct unwind_state *state) ...@@ -23,8 +25,10 @@ bool unwind_next_frame(struct unwind_state *state)
return false; return false;
do { do {
unsigned long addr = READ_ONCE_NOCHECK(*state->sp);
for (state->sp++; state->sp < info->end; state->sp++) for (state->sp++; state->sp < info->end; state->sp++)
if (__kernel_text_address(*state->sp)) if (__kernel_text_address(addr))
return true; return true;
state->sp = info->next_sp; state->sp = info->next_sp;
......
...@@ -2105,16 +2105,10 @@ static int em_iret(struct x86_emulate_ctxt *ctxt) ...@@ -2105,16 +2105,10 @@ static int em_iret(struct x86_emulate_ctxt *ctxt)
static int em_jmp_far(struct x86_emulate_ctxt *ctxt) static int em_jmp_far(struct x86_emulate_ctxt *ctxt)
{ {
int rc; int rc;
unsigned short sel, old_sel; unsigned short sel;
struct desc_struct old_desc, new_desc; struct desc_struct new_desc;
const struct x86_emulate_ops *ops = ctxt->ops;
u8 cpl = ctxt->ops->cpl(ctxt); u8 cpl = ctxt->ops->cpl(ctxt);
/* Assignment of RIP may only fail in 64-bit mode */
if (ctxt->mode == X86EMUL_MODE_PROT64)
ops->get_segment(ctxt, &old_sel, &old_desc, NULL,
VCPU_SREG_CS);
memcpy(&sel, ctxt->src.valptr + ctxt->op_bytes, 2); memcpy(&sel, ctxt->src.valptr + ctxt->op_bytes, 2);
rc = __load_segment_descriptor(ctxt, sel, VCPU_SREG_CS, cpl, rc = __load_segment_descriptor(ctxt, sel, VCPU_SREG_CS, cpl,
...@@ -2124,12 +2118,10 @@ static int em_jmp_far(struct x86_emulate_ctxt *ctxt) ...@@ -2124,12 +2118,10 @@ static int em_jmp_far(struct x86_emulate_ctxt *ctxt)
return rc; return rc;
rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc); rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
if (rc != X86EMUL_CONTINUE) { /* Error handling is not implemented. */
WARN_ON(ctxt->mode != X86EMUL_MODE_PROT64); if (rc != X86EMUL_CONTINUE)
/* assigning eip failed; restore the old cs */ return X86EMUL_UNHANDLEABLE;
ops->set_segment(ctxt, old_sel, &old_desc, 0, VCPU_SREG_CS);
return rc;
}
return rc; return rc;
} }
...@@ -2189,14 +2181,8 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt) ...@@ -2189,14 +2181,8 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt)
{ {
int rc; int rc;
unsigned long eip, cs; unsigned long eip, cs;
u16 old_cs;
int cpl = ctxt->ops->cpl(ctxt); int cpl = ctxt->ops->cpl(ctxt);
struct desc_struct old_desc, new_desc; struct desc_struct new_desc;
const struct x86_emulate_ops *ops = ctxt->ops;
if (ctxt->mode == X86EMUL_MODE_PROT64)
ops->get_segment(ctxt, &old_cs, &old_desc, NULL,
VCPU_SREG_CS);
rc = emulate_pop(ctxt, &eip, ctxt->op_bytes); rc = emulate_pop(ctxt, &eip, ctxt->op_bytes);
if (rc != X86EMUL_CONTINUE) if (rc != X86EMUL_CONTINUE)
...@@ -2213,10 +2199,10 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt) ...@@ -2213,10 +2199,10 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt)
if (rc != X86EMUL_CONTINUE) if (rc != X86EMUL_CONTINUE)
return rc; return rc;
rc = assign_eip_far(ctxt, eip, &new_desc); rc = assign_eip_far(ctxt, eip, &new_desc);
if (rc != X86EMUL_CONTINUE) { /* Error handling is not implemented. */
WARN_ON(ctxt->mode != X86EMUL_MODE_PROT64); if (rc != X86EMUL_CONTINUE)
ops->set_segment(ctxt, old_cs, &old_desc, 0, VCPU_SREG_CS); return X86EMUL_UNHANDLEABLE;
}
return rc; return rc;
} }
......
...@@ -94,7 +94,7 @@ static unsigned long ioapic_read_indirect(struct kvm_ioapic *ioapic, ...@@ -94,7 +94,7 @@ static unsigned long ioapic_read_indirect(struct kvm_ioapic *ioapic,
static void rtc_irq_eoi_tracking_reset(struct kvm_ioapic *ioapic) static void rtc_irq_eoi_tracking_reset(struct kvm_ioapic *ioapic)
{ {
ioapic->rtc_status.pending_eoi = 0; ioapic->rtc_status.pending_eoi = 0;
bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPUS); bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPU_ID);
} }
static void kvm_rtc_eoi_tracking_restore_all(struct kvm_ioapic *ioapic); static void kvm_rtc_eoi_tracking_restore_all(struct kvm_ioapic *ioapic);
......
...@@ -42,13 +42,13 @@ struct kvm_vcpu; ...@@ -42,13 +42,13 @@ struct kvm_vcpu;
struct dest_map { struct dest_map {
/* vcpu bitmap where IRQ has been sent */ /* vcpu bitmap where IRQ has been sent */
DECLARE_BITMAP(map, KVM_MAX_VCPUS); DECLARE_BITMAP(map, KVM_MAX_VCPU_ID);
/* /*
* Vector sent to a given vcpu, only valid when * Vector sent to a given vcpu, only valid when
* the vcpu's bit in map is set * the vcpu's bit in map is set
*/ */
u8 vectors[KVM_MAX_VCPUS]; u8 vectors[KVM_MAX_VCPU_ID];
}; };
......
...@@ -41,6 +41,15 @@ static int kvm_set_pic_irq(struct kvm_kernel_irq_routing_entry *e, ...@@ -41,6 +41,15 @@ static int kvm_set_pic_irq(struct kvm_kernel_irq_routing_entry *e,
bool line_status) bool line_status)
{ {
struct kvm_pic *pic = pic_irqchip(kvm); struct kvm_pic *pic = pic_irqchip(kvm);
/*
* XXX: rejecting pic routes when pic isn't in use would be better,
* but the default routing table is installed while kvm->arch.vpic is
* NULL and KVM_CREATE_IRQCHIP can race with KVM_IRQ_LINE.
*/
if (!pic)
return -1;
return kvm_pic_set_irq(pic, e->irqchip.pin, irq_source_id, level); return kvm_pic_set_irq(pic, e->irqchip.pin, irq_source_id, level);
} }
...@@ -49,6 +58,10 @@ static int kvm_set_ioapic_irq(struct kvm_kernel_irq_routing_entry *e, ...@@ -49,6 +58,10 @@ static int kvm_set_ioapic_irq(struct kvm_kernel_irq_routing_entry *e,
bool line_status) bool line_status)
{ {
struct kvm_ioapic *ioapic = kvm->arch.vioapic; struct kvm_ioapic *ioapic = kvm->arch.vioapic;
if (!ioapic)
return -1;
return kvm_ioapic_set_irq(ioapic, e->irqchip.pin, irq_source_id, level, return kvm_ioapic_set_irq(ioapic, e->irqchip.pin, irq_source_id, level,
line_status); line_status);
} }
......
...@@ -138,7 +138,7 @@ static inline bool kvm_apic_map_get_logical_dest(struct kvm_apic_map *map, ...@@ -138,7 +138,7 @@ static inline bool kvm_apic_map_get_logical_dest(struct kvm_apic_map *map,
*mask = dest_id & 0xff; *mask = dest_id & 0xff;
return true; return true;
case KVM_APIC_MODE_XAPIC_CLUSTER: case KVM_APIC_MODE_XAPIC_CLUSTER:
*cluster = map->xapic_cluster_map[dest_id >> 4]; *cluster = map->xapic_cluster_map[(dest_id >> 4) & 0xf];
*mask = dest_id & 0xf; *mask = dest_id & 0xf;
return true; return true;
default: default:
......
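The one-character lapic fix is a bounds clamp: xapic_cluster_map is sized for the 16 clusters of xAPIC logical addressing, but dest_id on this path can be wider than an 8-bit logical ID, so indexing by dest_id >> 4 alone can walk past the array. Masking keeps the index in range; in isolation:

/* Clamp a possibly-wider destination ID to a valid cluster index. */
static unsigned int xapic_cluster(unsigned int dest_id)
{
	return (dest_id >> 4) & 0xf;	/* always 0..15 */
}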
...@@ -135,7 +135,12 @@ void __init early_fixup_exception(struct pt_regs *regs, int trapnr) ...@@ -135,7 +135,12 @@ void __init early_fixup_exception(struct pt_regs *regs, int trapnr)
if (early_recursion_flag > 2) if (early_recursion_flag > 2)
goto halt_loop; goto halt_loop;
if (regs->cs != __KERNEL_CS) /*
* Old CPUs leave the high bits of CS on the stack
* undefined. I'm not sure which CPUs do this, but at least
* the 486 DX works this way.
*/
if ((regs->cs & 0xFFFF) != __KERNEL_CS)
goto fail; goto fail;
/* /*
......
...@@ -28,4 +28,4 @@ obj-$(subst m,y,$(CONFIG_GPIO_PCA953X)) += platform_pcal9555a.o ...@@ -28,4 +28,4 @@ obj-$(subst m,y,$(CONFIG_GPIO_PCA953X)) += platform_pcal9555a.o
obj-$(subst m,y,$(CONFIG_GPIO_PCA953X)) += platform_tca6416.o obj-$(subst m,y,$(CONFIG_GPIO_PCA953X)) += platform_tca6416.o
# MISC Devices # MISC Devices
obj-$(subst m,y,$(CONFIG_KEYBOARD_GPIO)) += platform_gpio_keys.o obj-$(subst m,y,$(CONFIG_KEYBOARD_GPIO)) += platform_gpio_keys.o
obj-$(subst m,y,$(CONFIG_INTEL_MID_WATCHDOG)) += platform_wdt.o obj-$(subst m,y,$(CONFIG_INTEL_MID_WATCHDOG)) += platform_mrfld_wdt.o
/* /*
* platform_wdt.c: Watchdog platform library file * Intel Merrifield watchdog platform device library file
* *
* (C) Copyright 2014 Intel Corporation * (C) Copyright 2014 Intel Corporation
* Author: David Cohen <david.a.cohen@linux.intel.com> * Author: David Cohen <david.a.cohen@linux.intel.com>
...@@ -14,7 +14,9 @@ ...@@ -14,7 +14,9 @@
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/platform_data/intel-mid_wdt.h> #include <linux/platform_data/intel-mid_wdt.h>
#include <asm/intel-mid.h> #include <asm/intel-mid.h>
#include <asm/intel_scu_ipc.h>
#include <asm/io_apic.h> #include <asm/io_apic.h>
#define TANGIER_EXT_TIMER0_MSI 15 #define TANGIER_EXT_TIMER0_MSI 15
...@@ -50,14 +52,34 @@ static struct intel_mid_wdt_pdata tangier_pdata = { ...@@ -50,14 +52,34 @@ static struct intel_mid_wdt_pdata tangier_pdata = {
.probe = tangier_probe, .probe = tangier_probe,
}; };
static int __init register_mid_wdt(void) static int wdt_scu_status_change(struct notifier_block *nb,
unsigned long code, void *data)
{ {
if (intel_mid_identify_cpu() == INTEL_MID_CPU_CHIP_TANGIER) { if (code == SCU_DOWN) {
wdt_dev.dev.platform_data = &tangier_pdata; platform_device_unregister(&wdt_dev);
return platform_device_register(&wdt_dev); return 0;
} }
return -ENODEV; return platform_device_register(&wdt_dev);
} }
static struct notifier_block wdt_scu_notifier = {
.notifier_call = wdt_scu_status_change,
};
static int __init register_mid_wdt(void)
{
if (intel_mid_identify_cpu() != INTEL_MID_CPU_CHIP_TANGIER)
return -ENODEV;
wdt_dev.dev.platform_data = &tangier_pdata;
/*
* We need to be sure that the SCU IPC is ready before the watchdog
* device can be registered:
*/
intel_scu_notifier_add(&wdt_scu_notifier);
return 0;
}
rootfs_initcall(register_mid_wdt); rootfs_initcall(register_mid_wdt);
...@@ -214,7 +214,7 @@ static int hash_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, ...@@ -214,7 +214,7 @@ static int hash_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
ahash_request_set_crypt(&ctx->req, NULL, ctx->result, 0); ahash_request_set_crypt(&ctx->req, NULL, ctx->result, 0);
if (!result) { if (!result && !ctx->more) {
err = af_alg_wait_for_completion( err = af_alg_wait_for_completion(
crypto_ahash_init(&ctx->req), crypto_ahash_init(&ctx->req),
&ctx->completion); &ctx->completion);
......
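The algif_hash condition is easiest to see from userspace: reading from an AF_ALG hash socket with no data ever written must yield the digest of the empty message, so crypto_ahash_init() is needed (the !result test covers this); but if a partial sendmsg(..., MSG_MORE) stream is in flight (ctx->more), re-initialising would wipe that state, hence the added !ctx->more. A minimal empty-message sketch, error handling elided:

#include <stdio.h>
#include <sys/socket.h>
#include <linux/if_alg.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "hash",
		.salg_name   = "sha256",
	};
	unsigned char digest[32];
	int tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);
	int op;

	bind(tfm, (struct sockaddr *)&sa, sizeof(sa));
	op = accept(tfm, NULL, NULL);
	read(op, digest, sizeof(digest));	/* no prior write */
	for (int i = 0; i < 32; i++)
		printf("%02x", digest[i]);
	printf("\n");	/* expect e3b0c442..., SHA-256 of "" */
	close(op);
	close(tfm);
	return 0;
}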
...@@ -133,7 +133,6 @@ struct x509_certificate *x509_cert_parse(const void *data, size_t datalen) ...@@ -133,7 +133,6 @@ struct x509_certificate *x509_cert_parse(const void *data, size_t datalen)
return cert; return cert;
error_decode: error_decode:
kfree(cert->pub->key);
kfree(ctx); kfree(ctx);
error_no_ctx: error_no_ctx:
x509_free_certificate(cert); x509_free_certificate(cert);
......
...@@ -68,10 +68,6 @@ void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg, ...@@ -68,10 +68,6 @@ void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
sg = scatterwalk_ffwd(tmp, sg, start); sg = scatterwalk_ffwd(tmp, sg, start);
if (sg_page(sg) == virt_to_page(buf) &&
sg->offset == offset_in_page(buf))
return;
scatterwalk_start(&walk, sg); scatterwalk_start(&walk, sg);
scatterwalk_copychunks(buf, &walk, nbytes, out); scatterwalk_copychunks(buf, &walk, nbytes, out);
scatterwalk_done(&walk, out, 0); scatterwalk_done(&walk, out, 0);
......
...@@ -47,32 +47,15 @@ static void acpi_sleep_tts_switch(u32 acpi_state) ...@@ -47,32 +47,15 @@ static void acpi_sleep_tts_switch(u32 acpi_state)
} }
} }
static void acpi_sleep_pts_switch(u32 acpi_state) static int tts_notify_reboot(struct notifier_block *this,
{
acpi_status status;
status = acpi_execute_simple_method(NULL, "\\_PTS", acpi_state);
if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) {
/*
* OS can't evaluate the _PTS object correctly. Some warning
* message will be printed. But it won't break anything.
*/
printk(KERN_NOTICE "Failure in evaluating _PTS object\n");
}
}
static int sleep_notify_reboot(struct notifier_block *this,
unsigned long code, void *x) unsigned long code, void *x)
{ {
acpi_sleep_tts_switch(ACPI_STATE_S5); acpi_sleep_tts_switch(ACPI_STATE_S5);
acpi_sleep_pts_switch(ACPI_STATE_S5);
return NOTIFY_DONE; return NOTIFY_DONE;
} }
static struct notifier_block sleep_notifier = { static struct notifier_block tts_notifier = {
.notifier_call = sleep_notify_reboot, .notifier_call = tts_notify_reboot,
.next = NULL, .next = NULL,
.priority = 0, .priority = 0,
}; };
...@@ -916,9 +899,9 @@ int __init acpi_sleep_init(void) ...@@ -916,9 +899,9 @@ int __init acpi_sleep_init(void)
pr_info(PREFIX "(supports%s)\n", supported); pr_info(PREFIX "(supports%s)\n", supported);
/* /*
* Register the sleep_notifier to reboot notifier list so that the _TTS * Register the tts_notifier to reboot notifier list so that the _TTS
* and _PTS object can also be evaluated when the system enters S5. * object can also be evaluated when the system enters S5.
*/ */
register_reboot_notifier(&sleep_notifier); register_reboot_notifier(&tts_notifier);
return 0; return 0;
} }
...@@ -685,7 +685,7 @@ static void __init berlin2_clock_setup(struct device_node *np) ...@@ -685,7 +685,7 @@ static void __init berlin2_clock_setup(struct device_node *np)
} }
/* register clk-provider */ /* register clk-provider */
of_clk_add_hw_provider(np, of_clk_hw_onecell_get, &clk_data); of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_data);
return; return;
......
...@@ -382,7 +382,7 @@ static void __init berlin2q_clock_setup(struct device_node *np) ...@@ -382,7 +382,7 @@ static void __init berlin2q_clock_setup(struct device_node *np)
} }
/* register clk-provider */ /* register clk-provider */
of_clk_add_hw_provider(np, of_clk_hw_onecell_get, &clk_data); of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_data);
return; return;
......
...@@ -82,6 +82,6 @@ static void __init efm32gg_cmu_init(struct device_node *np) ...@@ -82,6 +82,6 @@ static void __init efm32gg_cmu_init(struct device_node *np)
hws[clk_HFPERCLKDAC0] = clk_hw_register_gate(NULL, "HFPERCLK.DAC0", hws[clk_HFPERCLKDAC0] = clk_hw_register_gate(NULL, "HFPERCLK.DAC0",
"HFXO", 0, base + CMU_HFPERCLKEN0, 17, 0, NULL); "HFXO", 0, base + CMU_HFPERCLKEN0, 17, 0, NULL);
of_clk_add_hw_provider(np, of_clk_hw_onecell_get, &clk_data); of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_data);
} }
CLK_OF_DECLARE(efm32ggcmu, "efm32gg,cmu", efm32gg_cmu_init); CLK_OF_DECLARE(efm32ggcmu, "efm32gg,cmu", efm32gg_cmu_init);
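The three clk fixes (berlin2, berlin2q, efm32gg) are the same type error: of_clk_hw_onecell_get() casts its opaque argument to struct clk_hw_onecell_data *, and in these drivers clk_data is itself a pointer to that structure, so passing &clk_data registered a pointer-to-pointer and the core then read garbage. The intended shape, sketched with an assumed NR_CLKS and allocation:

struct clk_hw_onecell_data *clk_data;

clk_data = kzalloc(sizeof(*clk_data) +
		   NR_CLKS * sizeof(*clk_data->hws), GFP_KERNEL);
if (!clk_data)
	return;
clk_data->num = NR_CLKS;
/* ... register the clocks, filling clk_data->hws[i] ... */
of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_data);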
...@@ -191,6 +191,8 @@ static struct clk_div_table axi_div_table[] = { ...@@ -191,6 +191,8 @@ static struct clk_div_table axi_div_table[] = {
static SUNXI_CCU_DIV_TABLE(axi_clk, "axi", "cpu", static SUNXI_CCU_DIV_TABLE(axi_clk, "axi", "cpu",
0x050, 0, 3, axi_div_table, 0); 0x050, 0, 3, axi_div_table, 0);
#define SUN6I_A31_AHB1_REG 0x054
static const char * const ahb1_parents[] = { "osc32k", "osc24M", static const char * const ahb1_parents[] = { "osc32k", "osc24M",
"axi", "pll-periph" }; "axi", "pll-periph" };
...@@ -1230,6 +1232,16 @@ static void __init sun6i_a31_ccu_setup(struct device_node *node) ...@@ -1230,6 +1232,16 @@ static void __init sun6i_a31_ccu_setup(struct device_node *node)
val &= BIT(16); val &= BIT(16);
writel(val, reg + SUN6I_A31_PLL_MIPI_REG); writel(val, reg + SUN6I_A31_PLL_MIPI_REG);
/* Force AHB1 to PLL6 / 3 */
val = readl(reg + SUN6I_A31_AHB1_REG);
/* set PLL6 pre-div = 3 */
val &= ~GENMASK(7, 6);
val |= 0x2 << 6;
/* select PLL6 / pre-div */
val &= ~GENMASK(13, 12);
val |= 0x3 << 12;
writel(val, reg + SUN6I_A31_AHB1_REG);
sunxi_ccu_probe(node, reg, &sun6i_a31_ccu_desc); sunxi_ccu_probe(node, reg, &sun6i_a31_ccu_desc);
ccu_mux_notifier_register(pll_cpu_clk.common.hw.clk, ccu_mux_notifier_register(pll_cpu_clk.common.hw.clk,
......
...@@ -373,7 +373,7 @@ static void sun4i_get_apb1_factors(struct factors_request *req) ...@@ -373,7 +373,7 @@ static void sun4i_get_apb1_factors(struct factors_request *req)
else else
calcp = 3; calcp = 3;
calcm = (req->parent_rate >> calcp) - 1; calcm = (div >> calcp) - 1;
req->rate = (req->parent_rate >> calcp) / (calcm + 1); req->rate = (req->parent_rate >> calcp) / (calcm + 1);
req->m = calcm; req->m = calcm;
......
...@@ -270,8 +270,8 @@ static int check_vma(struct dax_dev *dax_dev, struct vm_area_struct *vma, ...@@ -270,8 +270,8 @@ static int check_vma(struct dax_dev *dax_dev, struct vm_area_struct *vma,
if (!dax_dev->alive) if (!dax_dev->alive)
return -ENXIO; return -ENXIO;
/* prevent private / writable mappings from being established */ /* prevent private mappings from being established */
if ((vma->vm_flags & (VM_NORESERVE|VM_SHARED|VM_WRITE)) == VM_WRITE) { if ((vma->vm_flags & VM_SHARED) != VM_SHARED) {
dev_info(dev, "%s: %s: fail, attempted private mapping\n", dev_info(dev, "%s: %s: fail, attempted private mapping\n",
current->comm, func); current->comm, func);
return -EINVAL; return -EINVAL;
......
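The check_vma() change widens the rejection from "private and writable" to "anything not shared": the old mask let a private read-only mmap through, but any private mapping of device-dax memory is copy-on-write, and the driver cannot keep a COW copy coherent with the device, writable or not. The two predicates side by side (kernel context assumed, flag names as in <linux/mm.h>):

/* old: rejects only private+writable mappings */
static bool rejected_old(unsigned long vm_flags)
{
	return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE;
}

/* new: rejects every mapping that is not MAP_SHARED */
static bool rejected_new(unsigned long vm_flags)
{
	return (vm_flags & VM_SHARED) != VM_SHARED;
}

For example, vm_flags == VM_READ (a private read-only mapping) slips past rejected_old() but is caught by rejected_new().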
...@@ -78,7 +78,9 @@ static int dax_pmem_probe(struct device *dev) ...@@ -78,7 +78,9 @@ static int dax_pmem_probe(struct device *dev)
nsio = to_nd_namespace_io(&ndns->dev); nsio = to_nd_namespace_io(&ndns->dev);
/* parse the 'pfn' info block via ->rw_bytes */ /* parse the 'pfn' info block via ->rw_bytes */
devm_nsio_enable(dev, nsio); rc = devm_nsio_enable(dev, nsio);
if (rc)
return rc;
altmap = nvdimm_setup_pfn(nd_pfn, &res, &__altmap); altmap = nvdimm_setup_pfn(nd_pfn, &res, &__altmap);
if (IS_ERR(altmap)) if (IS_ERR(altmap))
return PTR_ERR(altmap); return PTR_ERR(altmap);
......
...@@ -34,6 +34,7 @@ struct amdgpu_atpx { ...@@ -34,6 +34,7 @@ struct amdgpu_atpx {
static struct amdgpu_atpx_priv { static struct amdgpu_atpx_priv {
bool atpx_detected; bool atpx_detected;
bool bridge_pm_usable;
/* handle for device - and atpx */ /* handle for device - and atpx */
acpi_handle dhandle; acpi_handle dhandle;
acpi_handle other_handle; acpi_handle other_handle;
...@@ -205,7 +206,11 @@ static int amdgpu_atpx_validate(struct amdgpu_atpx *atpx) ...@@ -205,7 +206,11 @@ static int amdgpu_atpx_validate(struct amdgpu_atpx *atpx)
atpx->is_hybrid = false; atpx->is_hybrid = false;
if (valid_bits & ATPX_MS_HYBRID_GFX_SUPPORTED) { if (valid_bits & ATPX_MS_HYBRID_GFX_SUPPORTED) {
printk("ATPX Hybrid Graphics\n"); printk("ATPX Hybrid Graphics\n");
atpx->functions.power_cntl = false; /*
* Disable legacy PM methods only when pcie port PM is usable,
* otherwise the device might fail to power off or power on.
*/
atpx->functions.power_cntl = !amdgpu_atpx_priv.bridge_pm_usable;
atpx->is_hybrid = true; atpx->is_hybrid = true;
} }
...@@ -480,6 +485,7 @@ static int amdgpu_atpx_power_state(enum vga_switcheroo_client_id id, ...@@ -480,6 +485,7 @@ static int amdgpu_atpx_power_state(enum vga_switcheroo_client_id id,
*/ */
static bool amdgpu_atpx_pci_probe_handle(struct pci_dev *pdev) static bool amdgpu_atpx_pci_probe_handle(struct pci_dev *pdev)
{ {
struct pci_dev *parent_pdev = pci_upstream_bridge(pdev);
acpi_handle dhandle, atpx_handle; acpi_handle dhandle, atpx_handle;
acpi_status status; acpi_status status;
...@@ -494,6 +500,7 @@ static bool amdgpu_atpx_pci_probe_handle(struct pci_dev *pdev) ...@@ -494,6 +500,7 @@ static bool amdgpu_atpx_pci_probe_handle(struct pci_dev *pdev)
} }
amdgpu_atpx_priv.dhandle = dhandle; amdgpu_atpx_priv.dhandle = dhandle;
amdgpu_atpx_priv.atpx.handle = atpx_handle; amdgpu_atpx_priv.atpx.handle = atpx_handle;
amdgpu_atpx_priv.bridge_pm_usable = parent_pdev && parent_pdev->bridge_d3;
return true; return true;
} }
......
...@@ -2984,19 +2984,19 @@ static int smu7_get_pp_table_entry_callback_func_v0(struct pp_hwmgr *hwmgr, ...@@ -2984,19 +2984,19 @@ static int smu7_get_pp_table_entry_callback_func_v0(struct pp_hwmgr *hwmgr,
if (!(data->mc_micro_code_feature & DISABLE_MC_LOADMICROCODE) && memory_clock > data->highest_mclk) if (!(data->mc_micro_code_feature & DISABLE_MC_LOADMICROCODE) && memory_clock > data->highest_mclk)
data->highest_mclk = memory_clock; data->highest_mclk = memory_clock;
performance_level = &(ps->performance_levels
[ps->performance_level_count++]);
PP_ASSERT_WITH_CODE( PP_ASSERT_WITH_CODE(
(ps->performance_level_count < smum_get_mac_definition(hwmgr->smumgr, SMU_MAX_LEVELS_GRAPHICS)), (ps->performance_level_count < smum_get_mac_definition(hwmgr->smumgr, SMU_MAX_LEVELS_GRAPHICS)),
"Performance levels exceeds SMC limit!", "Performance levels exceeds SMC limit!",
return -EINVAL); return -EINVAL);
PP_ASSERT_WITH_CODE( PP_ASSERT_WITH_CODE(
(ps->performance_level_count <= (ps->performance_level_count <
hwmgr->platform_descriptor.hardwareActivityPerformanceLevels), hwmgr->platform_descriptor.hardwareActivityPerformanceLevels),
"Performance levels exceeds Driver limit!", "Performance levels exceeds Driver limit, Skip!",
return -EINVAL); return 0);
performance_level = &(ps->performance_levels
[ps->performance_level_count++]);
/* Performance levels are arranged from low to high. */ /* Performance levels are arranged from low to high. */
performance_level->memory_clock = memory_clock; performance_level->memory_clock = memory_clock;
......
...@@ -150,15 +150,14 @@ static void hdlcd_crtc_enable(struct drm_crtc *crtc) ...@@ -150,15 +150,14 @@ static void hdlcd_crtc_enable(struct drm_crtc *crtc)
clk_prepare_enable(hdlcd->clk); clk_prepare_enable(hdlcd->clk);
hdlcd_crtc_mode_set_nofb(crtc); hdlcd_crtc_mode_set_nofb(crtc);
hdlcd_write(hdlcd, HDLCD_REG_COMMAND, 1); hdlcd_write(hdlcd, HDLCD_REG_COMMAND, 1);
drm_crtc_vblank_on(crtc);
} }
static void hdlcd_crtc_disable(struct drm_crtc *crtc) static void hdlcd_crtc_disable(struct drm_crtc *crtc)
{ {
struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc); struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc);
if (!crtc->state->active) drm_crtc_vblank_off(crtc);
return;
hdlcd_write(hdlcd, HDLCD_REG_COMMAND, 0); hdlcd_write(hdlcd, HDLCD_REG_COMMAND, 0);
clk_disable_unprepare(hdlcd->clk); clk_disable_unprepare(hdlcd->clk);
} }
......
...... (66 further file diffs in this merge are collapsed in the web view)