- 27 May 2015 (2 commits)
-
Committed by Mark Rutland
In multi-cluster systems, the PMUs can be different across clusters, so our logical PMU may not be able to schedule events on all CPUs. This patch adds a cpumask to encode which CPUs a PMU driver supports controlling events for, and limits the driver to scheduling events on those CPUs and to enabling and disabling the physical PMUs on those CPUs. The cpumask is built from the interrupt-affinity property; in the absence of such a property, a homogeneous system is assumed.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
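
A minimal sketch of the scheduling restriction this describes; the container layout and helper names below are assumptions, not the driver's actual identifiers:

    #include <linux/kernel.h>
    #include <linux/errno.h>
    #include <linux/cpumask.h>
    #include <linux/perf_event.h>

    /* Hypothetical container; the real arm_pmu layout may differ. */
    struct my_arm_pmu {
        struct pmu pmu;
        cpumask_t  supported_cpus;  /* CPUs this PMU instance covers */
    };

    static int my_pmu_event_init(struct perf_event *event)
    {
        struct my_arm_pmu *apmu =
            container_of(event->pmu, struct my_arm_pmu, pmu);

        /* Refuse per-CPU events targeting CPUs outside our cluster. */
        if (event->cpu >= 0 &&
            !cpumask_test_cpu(event->cpu, &apmu->supported_cpus))
            return -ENOENT;

        return 0;
    }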
-
Committed by Mark Rutland
To support multiple PMUs we'll need to pass the arm_pmu instance around. Update of_pmu_irq_cfg to take an arm_pmu, and acquire the platform device from this.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 21 April 2015 (1 commit)
-
Committed by Russell King
Commit bf35706f ("ARM: 8314/1: replace PROCINFO embedded branch with relative offset") broke booting on nommu platforms, as it didn't update the nommu boot code. This patch fixes that oversight.

Fixes: bf35706f ("ARM: 8314/1: replace PROCINFO embedded branch with relative offset")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 13 April 2015 (2 commits)
-
Committed by Richard Weinberger
As execution domain support is gone, we can remove signal translation from the signal code and remove exec_domain from thread_info.

Signed-off-by: Richard Weinberger <richard@nod.at>
-
Committed by Richard Weinberger
The RISC OS personality seems to have been unused and untested for a long time, and it is doubtful whether it ever worked as expected. Let's rip it out.

Signed-off-by: Richard Weinberger <richard@nod.at>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 April 2015 (1 commit)
-
Committed by Xunlei Pang
As part of addressing the "y2038 problem" for in-kernel uses, this patch converts read_boot_clock() to read_boot_clock64() and read_persistent_clock() to read_persistent_clock64(), which use timespec64, by converting clock_access_fn to use timespec64.

Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Acked-by: Thierry Reding <treding@nvidia.com> (for tegra part)
Cc: Russell King <rmk@dyn-67.arm.linux.org.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1427945681-29972-7-git-send-email-john.stultz@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
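
The conversion follows a simple pattern; a minimal sketch, assuming a dummy fallback and a __read_persistent_clock hook (names illustrative, not necessarily the exact ARM code):

    #include <linux/time64.h>

    /* The accessor hook now traffics in timespec64 end to end. */
    typedef void (*clock_access_fn)(struct timespec64 *);

    static void dummy_clock_access(struct timespec64 *ts)
    {
        ts->tv_sec = 0;
        ts->tv_nsec = 0;
    }

    static clock_access_fn __read_persistent_clock = dummy_clock_access;

    void read_persistent_clock64(struct timespec64 *ts)
    {
        __read_persistent_clock(ts);
    }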
-
- 02 April 2015 (3 commits)
-
Committed by Geert Uytterhoeven
When trying to kexec into a new kernel on a platform where multiple CPU cores are present, but no SMP bringup code is available yet, the kexec_load system call fails with:

    kexec_load failed: Invalid argument

The SMP test added to machine_kexec_prepare() in commit 2103f6cb ("ARM: 7807/1: kexec: validate CPU hotplug support") wants to prohibit kexec on SMP platforms where it cannot disable secondary CPUs. However, this test is too strict: if the secondary CPUs couldn't be enabled in the first place, there's no need to disable them later at kexec time. Hence skip the test in the absence of SMP bringup code.

This allows all CPU cores to be added to the DTS from the beginning, without having to implement SMP bringup first, improving DT compatibility.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
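
A rough sketch of the relaxed check; here platform_can_secondary_boot() stands in for the "is SMP bringup code present" query and is my assumed name for it, not a confirmed kernel API:

    int machine_kexec_prepare(struct kimage *image)
    {
        /*
         * Refuse kexec only when secondary cores could actually have
         * been brought online (bringup code exists) yet cannot be
         * hotplugged back off.
         */
        if (num_possible_cpus() > 1 &&
            platform_can_secondary_boot() &&   /* assumed helper */
            !platform_can_cpu_hotplug())
            return -EINVAL;

        return 0;   /* remaining image validation elided */
    }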
-
Committed by Russell King
Move shutdown and reboot related code to a separate file, out of process.c. This helps to avoid polluting process.c with non-process related code.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Normally, when a CPU wants to clear a cache line to zero in the external L2 cache, it would generate bus cycles to write each word as it would do with any other data access. However, a Cortex A9 connected to a L2C-310 has a specific feature where the CPU can detect this operation, and signal that it wants to zero an entire cache line. This feature, known as Full Line of Zeros (FLZ), involves a non-standard AXI signalling mechanism which only the L2C-310 can properly interpret.

There are separate enable bits in both the L2C-310 and the Cortex A9 - the L2C-310 needs to be enabled and have the FLZ enable bit set in the auxiliary control register before the Cortex A9 has this feature enabled.

Unfortunately, the suspend code was not respecting this - it's not obvious from the code:

    swsusp_arch_suspend()
     cpu_suspend()   /* saves the Cortex A9 auxiliary control register */
      arch_save_image()
     soft_restart()  /* turns off FLZ in Cortex A9, and disables L2C */
      cpu_resume()   /* restores the Cortex A9 registers, inc auxcr */

At this point, we end up with the L2C disabled, but the Cortex A9 with FLZ enabled - which means any memset() or zeroing of a full cache line will fail to take effect.

A similar issue exists in the resume path, but it's slightly more complex:

    swsusp_arch_suspend()
     cpu_suspend()   /* saves the Cortex A9 auxiliary control register */
      arch_save_image()  /* image with A9 auxcr saved */
    ...
    swsusp_arch_resume()
     call_with_stack()
      arch_restore_image()  /* restores image with A9 auxcr saved above */
       soft_restart()  /* turns off FLZ in Cortex A9, and disables L2C */
        cpu_resume()   /* restores the Cortex A9 registers, inc auxcr */

Again, here we end up with the L2C disabled, but Cortex A9 FLZ enabled.

There's no need to turn off the L2C in either of these two paths; there are benefits from not doing so - for example, the page copies will be faster with the L2C enabled.

Hence, fix this by providing a variant of soft_restart() which can be used without turning the L2 cache controller off, and use it in both of these paths to keep the L2C enabled across the respective resume transitions.

Fixes: 8ef418c7 ("ARM: l2c: trial at enabling some Cortex-A9 optimisations")
Reported-by: Sean Cross <xobs@kosagi.com>
Tested-by: Sean Cross <xobs@kosagi.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
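
A sketch of the fix's shape, assuming the variant is expressed as an internal helper taking a flag (the names are my guess at the pattern, not confirmed from the patch itself):

    static void _soft_restart(unsigned long addr, bool disable_l2)
    {
        /* ... disable IRQs, switch to idmap, clean caches as before ... */
        if (disable_l2)
            outer_disable();   /* skipped on the suspend/resume paths */
        /* ... jump to addr ... */
    }

    void soft_restart(unsigned long addr)
    {
        /* existing callers keep the old behaviour */
        _soft_restart(addr, true);
    }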
-
- 30 March 2015 (3 commits)
-
Committed by Ard Biesheuvel
Move cpu_resume() to the .text section where it belongs. Change the adr reference to sleep_save_sp to an explicit PC-relative reference so sleep_save_sp itself can remain in .data. This helps prevent linker failure on large kernels, as the code in the .data section may be too far away to be in range for normal b/bl instructions.

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Ard Biesheuvel
This moves all fixup snippets to the .text.fixup section, which is a special section that gets emitted along with the .text section for each input object file, i.e., the snippets are kept much closer to the code they refer to, which helps prevent linker failure on large kernels.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Mark Rutland
arm64 builds with GCC 5 have caused the __asmeq assertions in the PSCI calling code to fire, so move the ARM PSCI calls out of line into their own assembly file for consistency and to safeguard against the same issue occurring with the 32-bit toolchain.

[will: brought into line with arm64 implementation]

Reported-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 March 2015 (1 commit)
-
Committed by Uwe Kleine-König
When the patch for commit e16343c4 ("ARM: 8160/1: drop warning about return_address not using unwind tables") was created, there was still more code in said branch; this simplification was probably just missed during conflict resolution when the patch was applied.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 28 March 2015 (7 commits)
-
Committed by Ard Biesheuvel
When running the 32-bit ARM kernel on ARMv8-capable bare metal (e.g., 32-bit Android userland and kernel on a Cortex-A53), or as a KVM guest on a 64-bit host, we should advertise the availability of the Crypto instructions, so that userland libraries such as OpenSSL may use them. (Support for the v8 Crypto instructions in the 32-bit build was added to OpenSSL more than six months ago.)

This adds the ID feature bit detection, and sets elf_hwcap2 accordingly.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
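
A hedged sketch of such detection; the field offsets follow the AArch32 ID_ISAR5 layout as I understand it (AES [7:4], SHA1 [11:8], SHA2 [15:12], CRC32 [19:16]), but treat the block as illustrative rather than the patch's exact code:

    static void __init cpuid_init_crypto_hwcaps(void)
    {
        unsigned int isar5 = read_cpuid_ext(CPUID_EXT_ISAR5);
        unsigned int aes = (isar5 >> 4) & 0xf;   /* 2 means AES + PMULL */

        if (aes >= 2)
            elf_hwcap2 |= HWCAP2_PMULL;
        if (aes >= 1)
            elf_hwcap2 |= HWCAP2_AES;
        if ((isar5 >> 8) & 0xf)
            elf_hwcap2 |= HWCAP2_SHA1;
        if ((isar5 >> 12) & 0xf)
            elf_hwcap2 |= HWCAP2_SHA2;
        if ((isar5 >> 16) & 0xf)
            elf_hwcap2 |= HWCAP2_CRC32;
    }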
-
Committed by Ard Biesheuvel
The various CPU feature registers consist of 4-bit blocks that represent signed quantities, whose positive values represent incremental features, and whose negative values are reserved. To improve forward compatibility, update the feature detection code to take possible future higher values into account, but ignore negative values.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
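
For illustration, a sign-extending 4-bit field extractor in the spirit described; a sketch, so the kernel's actual helper may differ in name and shape:

    /* Signed value of the 4-bit block starting at bit 'field'. */
    static int cpuid_feature_extract_field(u32 features, int field)
    {
        int feature = (features >> field) & 0xf;

        /* sign-extend: 0x8..0xf encode reserved (negative) values */
        if (feature > 7)
            feature -= 16;

        return feature;
    }

A feature is then considered present when its field is positive (or at least the required level), which remains true for any future, higher value.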
-
Committed by Ard Biesheuvel
This moves the .idmap.text section closer to .head.text, so that relative branches are less likely to go out of range if the kernel text gets bigger.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Ard Biesheuvel
This patch replaces the 'branch to setup()' instructions embedded in the PROCINFO structs with the offset to that setup function relative to the base of the struct. This preserves the position-independent nature of that field, but uses a data item rather than an instruction. This is mainly done to prevent linker failures on large kernels, where the setup function is out of reach for the branch.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Nathan Lynch
Allow users to enable the vdso in Kconfig; include the vdso in the build if CONFIG_VDSO is enabled. Add a 'vdso_install' target.

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Nathan Lynch
Initialize the VDSO page list at boot, install the VDSO mapping at exec time, and update the data page during timer ticks. This code is not built if CONFIG_VDSO is not enabled. Account for the VDSO length when randomizing the offset from the stack. The [vdso] and [vvar] pages are placed immediately following the sigpage with separate _install_special_mapping calls.

We want to "penalize" systems lacking the arch timer as little as possible. Previous versions of this code installed the VDSO unconditionally and unmodified, making it a measurably slower way for glibc to invoke the real syscalls on such systems. E.g. calling gettimeofday via glibc goes from ~560ns to ~630ns on i.MX6Q.

If we can indicate to glibc that the time-related APIs in the VDSO are not accelerated, glibc can continue to invoke the syscalls directly instead of dispatching through the VDSO only to fall back to the slow path. Thus, if the architected timer is unusable for whatever reason, patch the VDSO at boot time so that symbol lookups for gettimeofday and clock_gettime return NULL. (This is similar to what powerpc does and borrows code from there.) This allows glibc to perform the syscall directly instead of passing control to the VDSO, which minimizes the penalty. In my measurements the time taken for a gettimeofday call via glibc goes from ~560ns to ~580ns (again on i.MX6Q), and this is solely due to adding a test and branch to glibc's gettimeofday syscall wrapper.

An alternative to patching the VDSO at boot would be to not install the VDSO at all when the arch timer isn't usable. Another alternative is to include a separate "dummy" vdso.so without gettimeofday and clock_gettime, which would be selected at boot time. Either of these would get cumbersome if the VDSO were to gain support for an API such as getcpu, which is unrelated to arch timer support.

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
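
The boot-time patching amounts to hiding symbols from the dynamic linker; a minimal sketch, assuming a find_symbol() helper and an elfinfo view of the vdso image (the powerpc-inspired approach the text mentions):

    static void __init vdso_nullpatch_one(struct elfinfo *lib,
                                          const char *symname)
    {
        Elf32_Sym *sym = find_symbol(lib, symname);   /* assumed helper */

        if (sym)
            sym->st_shndx = SHN_UNDEF;   /* lookups now return NULL */
    }

    /* invoked only when the architected timer is unusable: */
    vdso_nullpatch_one(&vdso_info, "__vdso_gettimeofday");
    vdso_nullpatch_one(&vdso_info, "__vdso_clock_gettime");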
-
Committed by Nathan Lynch
Place VDSO-related user-space code in arch/arm/kernel/vdso/. It is almost completely written in C, with some assembly helpers to load the data page address, sample the counter, and fall back to system calls when necessary.

The VDSO can service gettimeofday and clock_gettime when CONFIG_ARM_ARCH_TIMER is enabled and the architected timer is present (and correctly configured). It reads the CP15-based virtual counter to compute high-resolution timestamps.

Of particular note is that a post-processing step ("vdsomunge") is necessary to produce a shared object which is architecturally allowed to be used by both soft- and hard-float EABI programs.

The 2012 edition of the ARM ABI defines Tag_ABI_VFP_args = 3 "Code is compatible with both the base and VFP variants; the user did not permit non-variadic functions to pass FP parameters/results." Unfortunately current toolchains do not support this tag, which is ideally what we would use.

The best available option is to ensure that both EF_ARM_ABI_FLOAT_SOFT and EF_ARM_ABI_FLOAT_HARD are unset in the ELF header's e_flags, indicating that the shared object is "old" and should be accepted for backward compatibility's sake. While binutils < 2.24 appear to produce a vdso.so with both flags clear, 2.24 always sets EF_ARM_ABI_FLOAT_SOFT, with no way to inhibit this behavior. So we have to fix things up with a custom post-processing step.

In fact, the VDSO code in glibc does much less validation (including checking these flags) than the code for handling conventional file-backed shared libraries, so this is a bit moot unless glibc's VDSO code becomes more strict.

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
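
The heart of such a munge step is a couple of bit-clears on the ELF header. A standalone sketch (the flag values come from the ELF ARM ABI; the surrounding tool, file I/O, and endianness handling are elided):

    #include <elf.h>

    #ifndef EF_ARM_ABI_FLOAT_SOFT
    #define EF_ARM_ABI_FLOAT_SOFT 0x200
    #endif
    #ifndef EF_ARM_ABI_FLOAT_HARD
    #define EF_ARM_ABI_FLOAT_HARD 0x400
    #endif

    /* Clear both float-ABI flags so loaders treat the object as "old". */
    static void munge_e_flags(Elf32_Ehdr *ehdr)
    {
        ehdr->e_flags &= ~(EF_ARM_ABI_FLOAT_SOFT | EF_ARM_ABI_FLOAT_HARD);
    }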
-
- 27 March 2015 (1 commit)
-
Committed by Ard Biesheuvel
Older binutils do not support expressions involving the values of external symbols, so just round up the HYP region to the page size.

Tested-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: when will this ever end?!]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 25 March 2015 (2 commits)
-
Committed by Ard Biesheuvel
Using ASSERT() with an expression that involves a symbol that is only supplied through a PROVIDE() definition in the linker script itself is apparently not supported by some older versions of binutils. So instead, rewrite the expression so that only the section boundaries __hyp_idmap_text_start and __hyp_idmap_text_end are used.

Note that this reverts the fix in 06f75a1f ("ARM, arm64: kvm: get rid of the bounce page") for the ASSERT() being triggered erroneously when unrelated linker-emitted veneers happen to end up in the HYP idmap region.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Daniel Lezcano
Add kernel-doc format documentation in the code.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
-
- 24 March 2015 (3 commits)
-
Committed by Will Deacon
Historically, the PMU devicetree bindings have expected SPIs to be listed in order of *logical* CPU number. This is problematic for bootloaders, especially when the boot CPU (logical ID 0) isn't listed first in the devicetree.

This patch adds a new optional property, interrupt-affinity, to the PMU node, which allows the interrupt affinity to be described using a list of phandles to CPU nodes, with each entry in the list corresponding to the SPI at the same index in the interrupts property.

Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
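
A sketch of consuming such a property from C; of_parse_phandle() and of_node_put() are the standard OF API, while the surrounding logic is assumed:

    #include <linux/of.h>

    static void parse_irq_affinity(struct device_node *pmu_node, int nr_irqs)
    {
        int i;

        for (i = 0; i < nr_irqs; i++) {
            /* entry i pairs with the SPI at index i in "interrupts" */
            struct device_node *cpu =
                of_parse_phandle(pmu_node, "interrupt-affinity", i);

            if (!cpu)
                break;   /* property absent, or shorter than expected */

            /* ... map 'cpu' back to a logical CPU number ... */
            of_node_put(cpu);
        }
    }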
-
Committed by Daniel Lezcano
Currently, the PM operations are passed to the different cpuidle drivers via platform_data, using the platform driver paradigm. This approach allowed the low-level PM code to be split from the arch-specific and generic cpuidle code.

Unfortunately there are complaints about this approach: in the context of the single kernel image, we have multiple drivers loaded in memory for nothing, and the platform driver is not adequate for cpuidle.

This patch provides a common interface, via cpuidle ops, for all new cpuidle drivers, together with a definition for the device tree. It will allow the next patches to share a common definition with ARM64 and use the same cpuidle driver. The code makes intensive use of the __init section in order to reduce the memory footprint after the driver is initialized, and unifies the function names with ARM64.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
Acked-by: Rob Herring <robherring2@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
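
A hedged sketch of what an ops-plus-DT-method interface can look like; the field names and registration macro here are assumptions, not the merged API:

    struct cpuidle_ops {
        int (*init)(struct device_node *cpu_node, int cpu);
        int (*suspend)(unsigned long arg);
    };

    struct of_cpuidle_method {
        const char *method;             /* matched against the DT enable-method */
        const struct cpuidle_ops *ops;
    };

    /* assumed registration macro, collecting entries in an init-only table */
    #define CPUIDLE_METHOD_OF_DECLARE(name, _method, _ops)             \
        static const struct of_cpuidle_method __cpuidle_method_##name  \
        __used __section(".cpuidle.method.table") =                    \
        { .method = _method, .ops = _ops }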
-
Committed by Daniel Lezcano
The cpu_do_idle() function is always used by the cpuidle drivers, which led to each driver including cpuidle.h and proc-fns.h; they are always paired. That makes for a lot of duplicate header inclusions. Instead of including both in each .c file, move the proc-fns.h inclusion into the cpuidle.h header file directly, so we can save some lines of code.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
- 23 March 2015 (2 commits)
-
Committed by Ard Biesheuvel
Commit 06f75a1f ("ARM, arm64: kvm: get rid of the bounce page") uses ld's builtin function LOG2CEIL() to align the KVM init code to a log2 upper bound of its size. However, this function turns out to be a fairly recent addition to binutils, which breaks the build for older toolchains. So instead, implement a replacement LOG2_ROUNDUP() using the C preprocessor.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
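
One way to write such a preprocessor replacement, given that the region is known to be small; a sketch, as the kernel's macro may enumerate a different range:

    /* log2 ceiling for sizes up to 512 bytes, evaluable by the linker */
    #define LOG2_ROUNDUP(size) (        \
            ((size) <=   4) ? 2 :       \
            ((size) <=   8) ? 3 :       \
            ((size) <=  16) ? 4 :       \
            ((size) <=  32) ? 5 :       \
            ((size) <=  64) ? 6 :       \
            ((size) <= 128) ? 7 :       \
            ((size) <= 256) ? 8 : 9)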
-
Committed by Peter Zijlstra
The only reason CQM had to use a hard-coded pmu type was so it could use cqm_target in hw_perf_event. Do away with the {tp,bp,cqm}_target pointers and provide a non-type-specific one. This allows us to do away with that silly pmu type as well.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Vince Weaver <vince@deater.net>
Cc: acme@kernel.org
Cc: acme@redhat.com
Cc: hpa@zytor.com
Cc: jolsa@redhat.com
Cc: kanaka.d.juvva@intel.com
Cc: matt.fleming@intel.com
Cc: tglx@linutronix.de
Cc: torvalds@linux-foundation.org
Cc: vikas.shivappa@linux.intel.com
Link: http://lkml.kernel.org/r/20150305211019.GU21418@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 20 March 2015 (2 commits)
-
Committed by Suzuki K. Poulose
The perf core implicitly rejects events spanning multiple HW PMUs, as in these cases the event->ctx will differ. However this validation is performed after pmu::event_init() is called in perf_init_event(), and thus pmu::event_init() may be called with a group leader from a different HW PMU.

The ARM PMU driver does not take this fact into account, and when validating groups assumes that it can call to_arm_pmu(event->pmu) for any HW event. When the event in question is from another HW PMU this is wrong, and results in dereferencing garbage.

This patch updates the ARM PMU driver to first test for and reject events from other PMUs, moving the to_arm_pmu and related logic after this test. Fixes a crash triggered by perf_fuzzer on Linux-4.0-rc2, with a CCI PMU present:

    ---
    CPU: 0 PID: 1527 Comm: perf_fuzzer Not tainted 4.0.0-rc2 #57
    Hardware name: ARM-Versatile Express
    task: bd8484c0 ti: be676000 task.ti: be676000
    PC is at 0xbf1bbc90
    LR is at validate_event+0x34/0x5c
    pc : [<bf1bbc90>]  lr : [<80016060>]  psr: 00000013
    ...
    [<80016060>] (validate_event) from [<80016198>] (validate_group+0x28/0x90)
    [<80016198>] (validate_group) from [<80016398>] (armpmu_event_init+0x150/0x218)
    [<80016398>] (armpmu_event_init) from [<800882e4>] (perf_try_init_event+0x30/0x48)
    [<800882e4>] (perf_try_init_event) from [<8008f544>] (perf_init_event+0x5c/0xf4)
    [<8008f544>] (perf_init_event) from [<8008f8a8>] (perf_event_alloc+0x2cc/0x35c)
    [<8008f8a8>] (perf_event_alloc) from [<8009015c>] (SyS_perf_event_open+0x498/0xa70)
    [<8009015c>] (SyS_perf_event_open) from [<8000e420>] (ret_fast_syscall+0x0/0x34)
    Code: bf1be000 bf1bb380 802a2664 00000000 (00000002)
    ---[ end trace 01aff0ff00926a0a ]---

Also cleans up the code to use the arm_pmu only when we know that we are dealing with an arm pmu event.

Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
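
The essence of the fix, sketched and simplified from the description above (to_arm_pmu and struct pmu_hw_events are the driver's internals; this is not the literal patch):

    static int validate_event(struct pmu *pmu, struct pmu_hw_events *hw_events,
                              struct perf_event *event)
    {
        struct arm_pmu *armpmu;

        if (is_software_event(event))
            return 1;

        /*
         * Reject events owned by a different PMU *before* casting:
         * to_arm_pmu() on a foreign event dereferences garbage.
         */
        if (event->pmu != pmu)
            return 0;

        if (event->state < PERF_EVENT_STATE_OFF)
            return 1;

        armpmu = to_arm_pmu(event->pmu);
        return armpmu->get_event_idx(hw_events, event) >= 0;
    }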
-
Committed by Ard Biesheuvel
The HYP init bounce page is a runtime construct that ensures that the HYP init code does not cross a page boundary. However, this is something we can do perfectly well at build time, by aligning the code appropriately.

For arm64, we just align to 4 KB, and enforce that the code size is less than 4 KB, regardless of the chosen page size. For ARM, the whole code is less than 256 bytes, so we tweak the linker script to align at a power-of-2 upper bound of the code size.

Note that this also fixes a benign off-by-one error in the original bounce page code, where a bounce page would be allocated unnecessarily if the code was exactly 1 page in size. On ARM, it also fixes an issue with very large kernels reported by Arnd Bergmann, where stub sections with linker-emitted veneers could erroneously trigger the size/alignment ASSERT() in the linker script.

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 18 March 2015 (4 commits)
-
Committed by Mason
Replace the inline asm statement in __get_cpu_architecture() with an equivalent macro invocation, i.e. read_cpuid_ext(CPUID_EXT_MMFR0).

As an added bonus, this squashes a potential bug, described by Paul Walmsley in commit 067e710b ("ARM: 7801/1: prevent gcc 4.5 from reordering extended CP15 reads above is_smp() test").

Signed-off-by: Marc Gonzalez <marc_gonzalez@sigmadesigns.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Stephen Boyd
Scorpion supports a set of local performance monitor event selection registers (LPM) sitting behind a cp15-based interface that extend the architected PMU events to include Scorpion CPU and Venum VFP specific events. To use these events, the user is expected to program the lpm register with the event code shifted into the group they care about, and then point the PMNx event at that region+group combo by writing a LPMn_GROUPx event. Add support for this hardware.

Note: the raw event number is a pure software construct that allows us to map the multi-dimensional number space of regions, groups, and event codes into a flat event number space suitable for use by the perf framework.

This is based on code originally written by Sheetal Sahasrabudhe, Ashwin Chaugule, and Neil Leeder [1].

[1] https://www.codeaurora.org/cgit/quic/la/kernel/msm/tree/arch/arm/kernel/perf_event_msm.c?h=msm-3.4

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Neil Leeder <nleeder@codeaurora.org>
Cc: Ashwin Chaugule <ashwinc@codeaurora.org>
Cc: Sheetal Sahasrabudhe <sheetals@codeaurora.org>
Cc: <devicetree@vger.kernel.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
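
A sketch of how such a flat raw-event number can be unpacked; the nibble layout here is assumed for illustration, and the driver's real encoding may place fields differently:

    /*
     * Assumed raw event layout 0xNRCCG:
     *   N  - Venum (VFP) event flag
     *   R  - region (LPM register) selector
     *   CC - event code within the group
     *   G  - group within the region
     */
    #define EVENT_VENUM(ev)   (!!((ev) & (1 << 16)))  /* N */
    #define EVENT_REGION(ev)  (((ev) >> 12) & 0xf)    /* R */
    #define EVENT_CODE(ev)    (((ev) >> 4) & 0xff)    /* CC */
    #define EVENT_GROUP(ev)   ((ev) & 0xf)            /* G */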
-
Committed by Stephen Boyd
The Krait-specific PMxEVCNTCR register is unpredictable upon reset. Currently we clear the register before we setup an event, but we don't need to do that. Instead, we can iterate through all the events and clear them once when we reset the PMU, saving a write in the event setup path.

Cc: Neil Leeder <nleeder@codeaurora.org>
Cc: Ashwin Chaugule <ashwinc@codeaurora.org>
Cc: Sheetal Sahasrabudhe <sheetals@codeaurora.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Stephen Boyd
Do some things to make the Krait PMU support code generic enough to be used by the Scorpion PMU support code:

* Rename the venum register functions to be venum instead of Krait specific, because the same registers exist on Scorpion
* Add some macros to decode our Krait-specific event encoding that's the same on Scorpion (modulo an extra region)
* Drop 'krait' from krait_clear_pmresrn_group() so it can be used by Scorpion code

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 12 March 2015 (1 commit)
-
Committed by Christoffer Dall
We can definitely decide at run-time whether to use the GIC and timers or not, and the extra code and data structures that we allocate space for are really negligible with this config option, so I don't think it's worth the extra complexity of always having to define stub static inlines. The !CONFIG_KVM_ARM_VGIC/TIMER case is pretty much an untested code path anyway, so we're better off just getting rid of it.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 23 February 2015 (2 commits)
-
Committed by Lorenzo Pieralisi
The pci_mmap_page_range() API should be written to expect offset values representing PCI memory resource addresses as seen by user space, through the pci_resource_to_user() API.

ARM relies on the standard implementation of pci_resource_to_user(), which actually is an identity map and exports to user space PCI memory resources as they are stored in PCI device resource structures, which represent CPU physical addresses (fixed up using BUS-to-CPU address conversions), not PCI bus addresses.

Therefore, on ARM platforms where the mapping between CPU and BUS addresses is not 1:1, the current pci_mmap_page_range() implementation is erroneous, in that an additional shift is applied to an already fixed-up offset passed from userspace.

Hence, this patch removes the mem_offset from the pgoff calculation, since the offset as passed from user space already represents the CPU physical address corresponding to the resource to be mapped, i.e. no additional offset should be applied.

Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Ard Biesheuvel
Currently, interworking calls on module boundaries are not supported, and are handled by the same error handling code path as non-interworking calls whose targets are simply out of range. Before modifying the handling of those out-of-range jump and call relocations in a subsequent patch, move the handling of interworking restrictions out of it.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 19 February 2015 (1 commit)
-
Committed by Uwe Kleine-König
of_device_ids (i.e. compatible strings and the respective data) are not supposed to change at runtime. All functions working with of_device_ids provided by <linux/of.h> work with const of_device_ids. So mark the non-const structs in arch/arm as const, too. While at it, also add some __initconst annotations.

Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
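
The pattern the commit applies, sketched with a made-up driver name:

    #include <linux/of.h>

    /* was: static struct of_device_id foo_dt_ids[] = { ... }; */
    static const struct of_device_id foo_dt_ids[] __initconst = {
        { .compatible = "vendor,foo" },   /* hypothetical */
        { /* sentinel */ }
    };

The __initconst annotation is only safe when the table is used exclusively during init, which is why the commit adds it selectively.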
-
- 14 February 2015 (1 commit)
-
Committed by Andrey Ryabinin
For instrumenting global variables, KASan will shadow the memory backing modules. So on module load we will need to allocate memory for the shadow and map it at the address in the shadow region that corresponds to the address allocated by module_alloc().

__vmalloc_node_range() could be used for this purpose, except it puts a guard hole after the allocated area. A guard hole in shadow memory would be a problem, because at some future point we might need to have shadow memory at the address occupied by the guard hole, so we could fail to allocate shadow for module_alloc().

Now we have the VM_NO_GUARD flag disabling the guard page, so we need to pass it into __vmalloc_node_range(). Add a new parameter 'vm_flags' to the __vmalloc_node_range() function.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
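
The resulting prototype, as I understand it (a sketch; check mm/vmalloc.c for the authoritative version), plus a hypothetical caller asking for no guard page:

    void *__vmalloc_node_range(unsigned long size, unsigned long align,
                               unsigned long start, unsigned long end,
                               gfp_t gfp_mask, pgprot_t prot,
                               unsigned long vm_flags,  /* new: e.g. VM_NO_GUARD */
                               int node, const void *caller);

    /* hypothetical shadow allocation for the module area: */
    shadow = __vmalloc_node_range(shadow_size, 1, shadow_start, shadow_end,
                                  GFP_KERNEL, PAGE_KERNEL, VM_NO_GUARD,
                                  NUMA_NO_NODE, __builtin_return_address(0));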
-
- 13 February 2015 (1 commit)
-
Committed by Andy Lutomirski
If an attacker can cause a controlled kernel stack overflow, overwriting the restart block is a very juicy exploit target. This is because the restart_block is held in the same memory allocation as the kernel stack.

Moving the restart block to struct task_struct prevents this exploit by making the restart_block harder to locate. Note that there are other fields in thread_info that are also easy targets, at least on some architectures.

It's also a decent simplification, since the restart code is more or less identical on all architectures.

[james.hogan@imgtec.com: metag: align thread_info::supervisor_stack]

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: David Miller <davem@davemloft.net>
Acked-by: Richard Weinberger <richard@nod.at>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: Steven Miao <realmz6@gmail.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Tested-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Chen Liqin <liqin.linux@gmail.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
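
In sketch form, the move changes where the per-task restart state lives (field placement illustrative, not the exact structures):

    /* before: adjacent to the kernel stack, easy to hit on overflow */
    struct thread_info {
        /* ... */
        struct restart_block restart_block;
    };

    /* after: inside the separately allocated task_struct */
    struct task_struct {
        /* ... */
        struct restart_block restart_block;
    };

    /* call sites change from task_thread_info(p)->restart_block to: */
    current->restart_block.fn = do_no_restart_syscall;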
-