- 16 September 2014, 1 commit

Submitted by Victor Kamensky
Previous commits that dealt with get_user for 64-bit types failed to export the corresponding helper functions, so if the get_user macro is used with certain target/value types by a kernel module, modpost produces an 'undefined!' error. The solution is to export all of the required functions.
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
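
For context, a minimal hedged sketch of the kind of export modpost needs; the symbol name below is an assumption for illustration, not the exact list exported by this patch:

```c
/* arch/arm/kernel/armksyms.c-style sketch: assembly uaccess helpers must be
 * exported explicitly, or modules that expand get_user to them fail at
 * modpost with "undefined!". The helper name is assumed here. */
#include <linux/export.h>

extern void __get_user_8(void);		/* assumed 64-bit get_user helper entry point */

EXPORT_SYMBOL(__get_user_8);
```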

- 13 September 2014, 1 commit

Submitted by Punit Agrawal
According to the ARM ARM (ARMv7), explicit barriers are necessary when synchronisation primitives such as SWP{B} are used. The instructions themselves do not imply a barrier, and any ordering requirements must be expressed by the software with suitable barriers. Based on this, remove the barriers from the SWP{B} emulation.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 03 September 2014, 1 commit

Submitted by Sudeep Holla
Since commit 1dbfa187 ("ARM: irq migration: force migration off CPU going down"), the ARM interrupt-migration code on CPU offline has called irqchip.irq_set_affinity() with the argument force=true. At the time of this change the argument had no effect, because it was not used by any interrupt chip driver and no semantics were defined for it. This changed with commit 01f8fa4f ("genirq: Allow forcing cpu affinity of interrupts"), which made the force argument useful for routing interrupts to not-yet-online CPUs without checking the target CPU against the CPU online mask. The follow-up commit ffde1de6 ("irqchip: gic: Support forced affinity setting") implemented this for the GIC interrupt controller. As a consequence, ARM CPU-offline IRQ migration fails if CPU0 is offlined, because CPU0 is still set in the affinity mask and the validation against the CPU online mask is skipped due to the force argument being true. The subsequent first_cpu(mask) selection then always selects CPU0 as the target. Solve the issue by calling irq_set_affinity() with force=false from the CPU-offline IRQ migration code, so the GIC driver validates the affinity mask against the CPU online mask and therefore removes CPU0 from the possible target candidates. Tested on TC2 by hotplugging CPU0 in and out. Without this patch the system locks up, as the IRQs are not migrated away from CPU0.
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: <stable@vger.kernel.org> # 3.10.x
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
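
A simplified, hedged sketch of the relevant call in the offline-migration path; the struct and helper names follow the genirq API, but the real code does more bookkeeping than shown:

```c
#include <linux/irq.h>
#include <linux/cpumask.h>

/* Sketch: migrate one IRQ away from a CPU that is going offline. */
static void migrate_one_irq_sketch(struct irq_data *d,
				   const struct cpumask *affinity)
{
	struct irq_chip *chip = irq_data_get_irq_chip(d);

	if (chip && chip->irq_set_affinity)
		/* force=false (it used to be true here): the GIC driver now
		 * validates 'affinity' against cpu_online_mask, so an offlined
		 * CPU0 can no longer be picked as the target. */
		chip->irq_set_affinity(d, affinity, false);
}
```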

- 27 August 2014, 2 commits

Submitted by Mark Rutland
On revisions of Cortex-A15 prior to r3p3, a CLREX instruction at PL1 may falsely trigger a watchpoint exception, leading to potential data aborts during exception return and/or livelock. This patch resolves the issue in the following ways:
- Replacing our uses of CLREX with a dummy STREX sequence (as we did for v6 CPUs).
- Removing the clrex code from v7_exit_coherency_flush and derivatives, since this only exists as a minor performance improvement when non-cached exclusives are in use (Linux doesn't use these).
Benchmarking on a variety of ARM cores revealed no measurable performance difference with this change applied, so the change is performed unconditionally and no new Kconfig entry is added.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Submitted by Andrey Ryabinin
A kernel module built with GCOV profiling fails to load with the following error:
$ insmod test_module.ko
test_module: unknown relocation: 38
insmod: can't insert 'test_module.ko': invalid module format
This happens because constructor pointers in the .init_array section use the R_ARM_TARGET1 relocation type, which the module loader did not support. The documentation (ELF for the ARM Architecture) says: "The relocation must be processed either in the same way as R_ARM_REL32 or as R_ARM_ABS32: a virtual platform must specify which method is used." Since the kernel expects to see absolute addresses in .init_array, the R_ARM_TARGET1 relocation type should be treated the same way as R_ARM_ABS32.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
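
A hedged sketch of what the module loader's relocation handling looks like once R_ARM_TARGET1 is treated like R_ARM_ABS32 (abbreviated; the real switch in arch/arm/kernel/module.c handles many more types):

```c
#include <linux/elf.h>
#include <linux/errno.h>
#include <linux/types.h>
#include <asm/elf.h>

#ifndef R_ARM_TARGET1
#define R_ARM_TARGET1	38	/* "ELF for the ARM Architecture" relocation code 38 */
#endif

/* Abbreviated sketch of the per-relocation cases in apply_relocate(). */
static int apply_one_reloc_sketch(u32 reloc_type, u32 *loc, Elf32_Sym *sym)
{
	switch (reloc_type) {
	case R_ARM_NONE:
		break;				/* nothing to do */

	case R_ARM_ABS32:
	case R_ARM_TARGET1:			/* platform choice: absolute, same as R_ARM_ABS32 */
		*loc += sym->st_value;
		break;

	default:
		return -ENOEXEC;		/* the "unknown relocation" path seen above */
	}
	return 0;
}
```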

- 09 August 2014, 2 commits

Submitted by Russell King
Add the memfd_create syscall to ARM.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Submitted by Russell King
Add the new getrandom syscall for ARM.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
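
Both entries follow the usual pattern for wiring a syscall on 32-bit ARM; a hedged sketch, where the numeric slots are assumptions for illustration rather than values taken from this log:

```c
/* arch/arm/include/uapi/asm/unistd.h (sketch): allocate the syscall numbers. */
#define __NR_getrandom		(__NR_SYSCALL_BASE + 384)	/* assumed slot */
#define __NR_memfd_create	(__NR_SYSCALL_BASE + 385)	/* assumed slot */

/* arch/arm/kernel/calls.S then gains matching CALL(sys_getrandom) and
 * CALL(sys_memfd_create) entries at the same indices. */
```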

- 08 August 2014, 1 commit

Submitted by Nicolas Pitre
The strings used to list IPIs in /proc/interrupts are reused for tracing purposes. While at it, prevent a negative ipinr from escaping the range check in handle_IPI().
Link: http://lkml.kernel.org/p/1406318733-26754-4-git-send-email-nicolas.pitre@linaro.org
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

- 02 August 2014, 3 commits

Submitted by Omar Sandoval
The kgdb breakpoint hooks (kgdb_brk_fn and kgdb_compiled_brk_fn) should only be entered when a kgdb break instruction is executed from the kernel. Otherwise, if kgdb is enabled, a userspace program can cause the kernel to drop into the debugger by executing either KGDB_BREAKINST or KGDB_COMPILED_BREAK.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
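
A hedged sketch of one way to express the restriction with the ARM undef_hook matching fields, so the hook only fires for breakpoints executed in kernel (SVC) mode; the actual patch may differ in detail:

```c
#include <linux/kgdb.h>
#include <asm/traps.h>
#include <asm/ptrace.h>

/* Sketch: match KGDB_BREAKINST only when it was executed in SVC mode; a
 * user-space KGDB_BREAKINST then takes the normal SIGTRAP path instead. */
static struct undef_hook kgdb_brkpt_hook_sketch = {
	.instr_mask	= 0xffffffff,
	.instr_val	= KGDB_BREAKINST,
	.cpsr_mask	= MODE_MASK,	/* compare the processor mode bits ...  */
	.cpsr_val	= SVC_MODE,	/* ... and require kernel (SVC) mode     */
	.fn		= kgdb_brk_fn,	/* existing ARM kgdb breakpoint handler  */
};
```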

Submitted by Shawn Guo
With SCU standby enabled, the SCU CLK will be turned off when all processors are in WFI mode, and turned back on when any processor leaves WFI mode. This behaviour is preferable in terms of the power efficiency of system idle, so let's set the SCU standby bit in scu_enable() to enable the support. Cortex-A9 revisions earlier than r2p0 have no standby bit in the SCU, so we need to skip setting the bit for those.
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Acked-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
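
A hedged sketch of the register poke involved; the control-register offset and bit position follow the usual Cortex-A9 SCU layout but should be treated as assumptions here:

```c
#include <linux/io.h>
#include <linux/types.h>

#define SCU_CTRL		0x00		/* SCU control register offset       */
#define SCU_ENABLE		(1 << 0)	/* SCU enable bit                    */
#define SCU_STANDBY_ENABLE	(1 << 5)	/* SCU standby enable (assumed bit)  */

/* Sketch of scu_enable() with standby support; the real code also skips the
 * standby bit on Cortex-A9 revisions earlier than r2p0. */
static void scu_enable_sketch(void __iomem *scu_base)
{
	u32 scu_ctrl = readl_relaxed(scu_base + SCU_CTRL);

	scu_ctrl |= SCU_ENABLE | SCU_STANDBY_ENABLE;
	writel_relaxed(scu_ctrl, scu_base + SCU_CTRL);
}
```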

Submitted by Shawn Guo
Use a macro instead of a magic number for the SCU enable bit.
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Acked-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 30 July 2014, 1 commit

Submitted by Laura Abbott
Commit 1c2f87c2 ("ARM: 8025/1: Get rid of meminfo") dropped the upper bound on the number of memory banks that can be added, as there was no technical need for it in the kernel. It turns out, though, that some bootloaders (specifically on the arndale-octa Exynos boards) may pass invalid memory information and rely on the kernel not parsing this data. This is a bug in the bootloader, but we still need to work around it. Do so by introducing a dt_fixup function. This function gets called before the flattened devicetree is scanned for memory and the like. In this fixup function for Exynos, limit the maximum number of memory regions in the devicetree.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Tested-by: Andreas Färber <afaerber@suse.de>
[glikely: Added a comment and fixed up function name]
Signed-off-by: Grant Likely <grant.likely@linaro.org>
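
A hedged sketch of what such a fixup can look like for mach-exynos; the helper name and the limit of 8 regions are assumptions for illustration:

```c
#include <linux/init.h>
#include <linux/of_fdt.h>
#include <asm/mach/arch.h>

/* Runs before the flattened devicetree is scanned for memory nodes. */
static void __init exynos_dt_fixup_sketch(void)
{
	/* Cap how many memory regions are accepted from the bootloader's
	 * (possibly bogus) memory node; 8 is an assumed limit here. */
	of_fdt_limit_memory(8);
}

DT_MACHINE_START(EXYNOS_DT_SKETCH, "Samsung Exynos (Flattened Device Tree)")
	.dt_fixup	= exynos_dt_fixup_sketch,
MACHINE_END
```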

- 24 July 2014, 1 commit

Submitted by Gregory Fong
Broadcom Brahma-B15 (r0p0..r0p2) is also affected by Cortex-A15 erratum 798181, so enable the workaround for Brahma-B15.
Signed-off-by: Gregory Fong <gregory.0xf0@gmail.com>
Acked-by: Marc Carino <marc.ceeeee@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 19 July 2014, 1 commit

Submitted by Kees Cook
Wires up the new seccomp syscall.
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>

- 18 July 2014, 10 commits

Submitted by Sebastian Hesselbarth
Commit 431a84b1 ("ARM: 8034/1: Disable preemption in iwmmxt_task_enable()") introduced the macros {inc,dec}_preempt_count into iwmmxt_task_enable() to make it run with preemption disabled. Unfortunately, other functions in iwmmxt.S also use the concan_{save,dump,load} sections located in iwmmxt_task_enable() to deal with the iWMMXt coprocessor. This causes an unbalanced preempt_count due to excessive dec_preempt_count, and destroyed return addresses in callers of the concan_ labels due to a register collision:
Linux version 3.16.0-rc3-00062-gd92a333a-dirty (jef@armhf) (gcc version 4.8.3 (Debian 4.8.3-4) ) #5 PREEMPT Thu Jul 3 19:46:39 CEST 2014
CPU: ARMv7 Processor [560f5815] revision 5 (ARMv7), cr=10c5387d
CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
Machine model: SolidRun CuBox
...
PJ4 iWMMXt v2 coprocessor enabled.
...
Unable to handle kernel paging request at virtual address fffffffe
pgd = bb25c000
[fffffffe] *pgd=3bfde821, *pte=00000000, *ppte=00000000
Internal error: Oops: 80000007 [#1] PREEMPT ARM
Modules linked in:
CPU: 0 PID: 62 Comm: startpar Not tainted 3.16.0-rc3-00062-gd92a333a-dirty #5
task: bb230b80 ti: bb256000 task.ti: bb256000
PC is at 0xfffffffe
LR is at iwmmxt_task_copy+0x44/0x4c
pc : [<fffffffe>]    lr : [<800130ac>]    psr: 40000033
sp : bb257de8  ip : 00000013  fp : bb257ea4
r10: bb256000  r9 : fffffdfe  r8 : 76e898e6
r7 : bb257ec8  r6 : bb256000  r5 : 7ea12760  r4 : 000000a0
r3 : ffffffff  r2 : 00000003  r1 : bb257df8  r0 : 00000000
Flags: nZcv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
Control: 10c5387d  Table: 3b25c019  DAC: 00000015
Process startpar (pid: 62, stack limit = 0xbb256248)
This patch fixes the issue by moving concan_{save,dump,load} into separate code sections and making iwmmxt_task_enable() call them in the same way the other functions use the concan_ symbols. The test for valid ownership is moved to concan_save and is safe for the other user of it, iwmmxt_task_disable(). The register collision is also resolved by moving the concan_ symbols, as {inc,dec}_preempt_count are now local to iwmmxt_task_enable().
Fixes: 431a84b1 ("ARM: 8034/1: Disable preemption in iwmmxt_task_enable()")
Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Submitted by Russell King
When the CPU has support for the byte and word exclusive operations, userspace should use them in preference to the SWP instructions. Detect the presence of these instructions by reading the ISAR CPU ID registers and adjust the ELF HWCAP mask appropriately. Note that ARM1136 < r1p0 has no ISAR4, so this is explicitly detected and the test disabled, leaving the current situation where HWCAP_SWP is set.
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Submitted by Russell King
Earlier CPUs do not have the ability to trap SWP instructions, so it is pointless to initialise this code on them.
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Submitted by Mark Rutland
Commit 78d7530a ("ARM: Clean up linker script using new linker script macros.") modified the arm kernel linker script to use the STABS_DEBUG macro, but left a .comment section definition. As STABS_DEBUG defines the .comment section in an identical way, the second section definition is redundant and can be removed. This patch removes the redundant .comment section definition.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Submitted by Nikolay Borisov
Use the newly-introduced frame_pointer macro to extract the correct FP based on whether we are in THUMB2 mode or not.
Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
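
For reference, a hedged sketch of what such a frame_pointer() helper amounts to (in Thumb-2 kernels r7, not r11, serves as the frame pointer); the exact definition lives in the ARM ptrace header:

```c
/* Sketch of a per-ABI frame-pointer accessor over struct pt_regs. */
#ifdef CONFIG_THUMB2_KERNEL
#define frame_pointer(regs)	((regs)->ARM_r7)	/* Thumb-2: r7 is the FP */
#else
#define frame_pointer(regs)	((regs)->ARM_fp)	/* ARM: r11 is the FP    */
#endif
```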

Submitted by Nikolay Borisov
Make the unwind code use the correct API so that the frame pointer is extracted from the correct register.
Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Submitted by Nikolay Borisov
Make use of the arm_get_current_stackframe API so that the frame pointer is correctly referenced in THUMB2 mode.
Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Submitted by Nikolay Borisov
Make the perf backend use the API so that it correctly references the FP when in THUMB2 mode.
Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Submitted by Russell King
ARMv6 and greater introduced a new instruction ("bx") which can be used to return from function calls. Recent CPUs perform better when the "bx lr" instruction is used rather than the "mov pc, lr" instruction, and this sequence is strongly recommended by the ARM architecture manual (section A.4.1.1). We provide a new macro "ret", with all its condition-code variants, which resolves to the appropriate instruction. Rather than doing this piecemeal, and missing some instances, change all the "mov pc" instances to use the new macro, with the exception of the "movs" instruction and the kprobes code. This allows us to detect the "mov pc, lr" case and fix it up, and also gives us the possibility of deploying this for other registers depending on the CPU selection.
Reported-by: Will Deacon <will.deacon@arm.com>
Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
Tested-by: Shawn Guo <shawn.guo@freescale.com>
Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Submitted by Russell King
Ensure that platform maintainers check the CPU part number in the right manner: the CPU part number is meaningless without also checking the CPU implement(e|o)r (choose your preferred spelling!). Provide an interface which returns both the implementer and part number together, and update the definitions to include the implementer. Mark the old function as deprecated; indeed, using the old function with the new definitions will now always evaluate as false, so people must update their un-merged code to the new function. While this could be avoided by adding new definitions, we'd also have to create new names for them, which would be awkward.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
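
A hedged sketch of the intended usage pattern with the new interface; the names follow the commit description, with the exact definitions assumed to live in asm/cputype.h:

```c
#include <linux/types.h>
#include <asm/cputype.h>

/* Sketch: check implementer and part number together, instead of the
 * deprecated part-number-only helper. */
static bool cpu_is_cortex_a15_sketch(void)
{
	return read_cpuid_part() == ARM_CPU_PART_CORTEX_A15;
}
```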

- 11 July 2014, 2 commits

Submitted by Li Liu
HSCTLR.EE is defined as bit[25]; see the ARM manual DDI 0406C.b (p. 1590).
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Li Liu <john.liuli@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Submitted by Marc Zyngier
In order to make way for the GICv3 registers, move the v2-specific registers to their own structure.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

- 09 July 2014, 2 commits

Submitted by Jean Pihet
Under perf, the fp unwinding scheme requires access to user-space memory and can provoke a page fault via the call to __copy_from_user_inatomic from user_backtrace. This unwinding can take place in response to an interrupt (__perf_event_overflow). This is undesirable, as we may already have mmap_sem held for write. One example is a process that calls mprotect just as the PMU counters overflow. An example that can provoke this behaviour:
perf record -e event:tocapture --call-graph fp ./application_to_test
This patch addresses the issue by disabling page faults briefly in user_backtrace (as is done on the other architectures: ARM64, x86, Sparc etc.). Without the patch, a deadlock occurs when __perf_event_overflow is called while reading the data from user space:
[ INFO: possible recursive locking detected ]
3.16.0-rc2-00038-g0ed7ff6 #46 Not tainted
---------------------------------------------
stress/1634 is trying to acquire lock:
 (&mm->mmap_sem){++++++}, at: [<c001dc04>] do_page_fault+0xa8/0x428
but task is already holding lock:
 (&mm->mmap_sem){++++++}, at: [<c00f4098>] SyS_mprotect+0xa8/0x1c8
other info that might help us debug this:
 Possible unsafe locking scenario:
       CPU0
       ----
  lock(&mm->mmap_sem);
  lock(&mm->mmap_sem);
 *** DEADLOCK ***
 May be due to missing lock nesting notation
2 locks held by stress/1634:
 #0: (&mm->mmap_sem){++++++}, at: [<c00f4098>] SyS_mprotect+0xa8/0x1c8
 #1: (rcu_read_lock){......}, at: [<c00c29dc>] __perf_event_overflow+0x120/0x294
stack backtrace:
CPU: 1 PID: 1634 Comm: stress Not tainted 3.16.0-rc2-00038-g0ed7ff6 #46
[<c0017c8c>] (unwind_backtrace) from [<c0012eec>] (show_stack+0x20/0x24)
[<c0012eec>] (show_stack) from [<c04de914>] (dump_stack+0x7c/0x98)
[<c04de914>] (dump_stack) from [<c006a360>] (__lock_acquire+0x1484/0x1cf0)
[<c006a360>] (__lock_acquire) from [<c006b14c>] (lock_acquire+0xa4/0x11c)
[<c006b14c>] (lock_acquire) from [<c04e3880>] (down_read+0x40/0x7c)
[<c04e3880>] (down_read) from [<c001dc04>] (do_page_fault+0xa8/0x428)
[<c001dc04>] (do_page_fault) from [<c00084ec>] (do_DataAbort+0x44/0xa8)
[<c00084ec>] (do_DataAbort) from [<c0013a1c>] (__dabt_svc+0x3c/0x60)
Exception stack(0xed7c5ae0 to 0xed7c5b28)
5ae0: ed7c5b5c b6dadff4 ffffffec 00000000 b6dadff4 ebc08000 00000000 ebc08000
5b00: 0000007e 00000000 ed7c4000 ed7c5b94 00000014 ed7c5b2c c001a438 c0236c60
5b20: 00000013 ffffffff
[<c0013a1c>] (__dabt_svc) from [<c0236c60>] (__copy_from_user+0xa4/0x3a4)
Acked-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
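
A simplified, hedged sketch of the shape of the fix in user_backtrace(); the real function also records the unwound frame in the perf callchain, and the access_ok signature shown is the 3.16-era one:

```c
#include <linux/uaccess.h>
#include <linux/types.h>

struct frame_tail {
	struct frame_tail __user *fp;
	unsigned long sp;
	unsigned long lr;
} __packed;

/* Sketch: read one user-space frame record without ever sleeping on a page
 * fault, so mmap_sem cannot be taken recursively from the overflow path. */
static struct frame_tail __user *
user_backtrace_sketch(struct frame_tail __user *tail)
{
	struct frame_tail buftail;
	unsigned long err;

	if (!access_ok(VERIFY_READ, tail, sizeof(buftail)))
		return NULL;

	pagefault_disable();			/* the actual point of the fix */
	err = __copy_from_user_inatomic(&buftail, tail, sizeof(buftail));
	pagefault_enable();

	if (err)
		return NULL;

	/* ... record buftail.lr in the callchain, then follow the chain ... */
	return buftail.fp - 1;
}
```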

Submitted by Jean Pihet
An event may occur when an mm has already been released. Handle this as per commit 20afc60f ("x86, perf: Check that current->mm is alive before getting user callchain").
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 02 July 2014, 10 commits

Submitted by Mark Rutland
Currently the krait_pmu_{enable,disable}_event functions use the global cpu_pmu variable, while all the other PMU enable/disable functions derive this from the event argument. This patch brings the Krait functions into line with the rest of the PMU backends by deriving the address of the pmu from the event argument.
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
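
A hedged sketch of the change in shape; to_arm_pmu() is the helper the other backends are assumed to use (it lives in the ARM perf_event code, not an exported header):

```c
#include <linux/perf_event.h>

/* After (sketch): derive the PMU from the event instead of the file-scope
 * cpu_pmu global, matching the other ARMv7 backends. */
static void krait_pmu_disable_event_sketch(struct perf_event *event)
{
	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);

	/* ... program the Krait counter/PMRESR registers via cpu_pmu ... */
}
```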

Submitted by Mark Rutland
When described in DT, PMUs are given specific compatible strings (e.g. "arm,cortex-a15-pmu"), which makes it very easy to reorganise the way individual PMUs are handled (i.e. we can easily split them into separate drivers). The same is not true of PMUs described in board files, which all use the platform_device_id "arm-pmu" and must all be handled by the same driver. To enable splitting the ARMv6, ARMv7, and XScale PMU drivers we need board files to identify which variant they provide. As a first step, this patch adds new platform_device_id values: "armv6-pmu", "armv7-pmu", and "xscale-pmu". Once board files are moved over and all existing uses of "arm-pmu" are gone, we can split the existing driver apart.
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
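
A hedged sketch of the resulting id table in the common ARM PMU platform driver (the real table also carries per-entry driver_data pointing at the probe routines):

```c
#include <linux/platform_device.h>

static const struct platform_device_id cpu_pmu_plat_device_ids[] = {
	{ .name = "arm-pmu" },		/* legacy catch-all, kept for old board files */
	{ .name = "armv6-pmu" },
	{ .name = "armv7-pmu" },
	{ .name = "xscale-pmu" },
	{ /* sentinel */ },
};
```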

Submitted by Mark Rutland
The perf userspace tools can't handle dashes or spaces in PMU names, which conflicts with the current naming scheme in the arm perf backend. This prevents these PMUs from being accessed by name from the perf tools. Additionally, the ARMv6 PMUs are named "v6", which does not fully distinguish them in the sys/bus/event_source namespace. This patch renames the PMUs consistently to a lower-case form with underscores, e.g. "armv6_1176", "armv7_cortex_a9". This is both readily accepted by today's perf tool and far easier to type than the (apparently unused) convention in use previously. The OProfile name-conversion code is updated to handle this. Due to a copy-paste error involving two "xscale1" entries, "xscale2" has never been matched by the OProfile name mapping. While we're updating names, this is corrected.
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[sachin: fixed missing semicolons in armv6 backend]
Signed-off-by: Sachin Kamat <sachin.kamat@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Submitted by Mark Rutland
Now that we have macros for declaring fully invalid event maps, put them to work for the XScale PMU event maps. While this necessitates repeating common indices, we no longer need to refer to *_UNSUPPORTED events at all, and it makes it possible for the event maps to fit on a single page on a reasonably sized monitor.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Submitted by Mark Rutland
Now that we have macros for declaring fully invalid event maps, put them to work for all the ARMv6 PMU event maps. While this necessitates repeating common indices, we no longer need to refer to *_UNSUPPORTED events at all, and it makes it possible for the event maps to fit on a single page on a reasonably sized monitor.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Submitted by Mark Rutland
Now that we have macros for declaring fully invalid event maps, put them to work for all the ARMv7 PMU event maps. While this necessitates repeating common indices, we no longer need to refer to *_UNSUPPORTED events at all, and it makes it possible for the event maps to fit on a single page on a reasonably sized monitor.
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
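
The pattern shared by the last three commits (XScale, ARMv6, ARMv7), as a hedged sketch; the macro and event names are the ones the ARM perf code is assumed to use internally:

```c
#include <linux/perf_event.h>

/* Sketch: start from a map in which every generic perf event is unsupported,
 * then override only the events this PMU actually implements.  Everything
 * not listed stays at the "unsupported" marker. */
static const unsigned armv7_a9_perf_map_sketch[PERF_COUNT_HW_MAX] = {
	PERF_MAP_ALL_UNSUPPORTED,
	[PERF_COUNT_HW_CPU_CYCLES]	= ARMV7_PERFCTR_CPU_CYCLES,
	[PERF_COUNT_HW_INSTRUCTIONS]	= ARMV7_PERFCTR_INSTR_EXECUTED,
};
```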

Submitted by Jon Medhurst
Conditionally compile kprobes test cases for ARMv5 instructions to avoid compilation errors with ARMv4 targets like:
/tmp/cc7Tx8ST.s:16740: Error: selected processor does not support ARM mode `clz r0,r0'
Signed-off-by: Jon Medhurst <tixy@linaro.org>

Submitted by Jon Medhurst
ARM data-processing instructions which have a register-specified shift are defined as UNPREDICTABLE if PC is used for any register, not just the shift value as the code was previously assuming. This issue manifests on A15 devices as either test-case failures or undefined-instruction aborts.
Reported-by: David Long <dave.long@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>

Submitted by Jon Medhurst
Due to a long-standing issue with Thumb symbol lookup [1], the jprobes tests fail when built into a kernel compiled in Thumb mode. (They work fine for ARM-mode kernels or for Thumb when built as a loadable module.) Rather than have this problem terminate testing prematurely, let's instead emit an error message and carry on with the main kprobes tests, delaying the final failure report until the end.
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2011-August/063026.html
Signed-off-by: Jon Medhurst <tixy@linaro.org>

Submitted by Guenter Roeck
Commit 143e1e28 ("sched: Rework sched_domain topology definition") introduced a number of functions with a return value of 'const int'. gcc doesn't know what to do with that and, if the kernel is compiled with W=1, complains with the following warnings whenever sched.h is included:
include/linux/sched.h:875:25: warning: type qualifiers ignored on function return type
include/linux/sched.h:882:25: warning: type qualifiers ignored on function return type
include/linux/sched.h:889:25: warning: type qualifiers ignored on function return type
include/linux/sched.h:1002:21: warning: type qualifiers ignored on function return type
Commits fb2aa855 ("sched, ARM: Create a dedicated scheduler topology table") and 607b45e9 ("sched, powerpc: Create a dedicated topology table") introduce the same warning in the arm and powerpc code. Drop 'const' from the function declarations to fix the problem. The fix for all three patches has to be applied together to avoid compilation failures for the affected architectures.
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1403658329-13196-1-git-send-email-linux@roeck-us.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
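
The shape of the fix, sketched for an ARM topology flag accessor; the function name and flags follow the topology-rework series and should be treated as illustrative:

```c
#include <linux/sched.h>

/* Before: the qualifier on the return type triggers
 *   "warning: type qualifiers ignored on function return type" under W=1. */
static const int cpu_corepower_flags_old(void)
{
	return SD_SHARE_PKG_RESOURCES | SD_SHARE_POWERDOMAIN;
}

/* After: plain int, same behaviour, no warning. */
static int cpu_corepower_flags_new(void)
{
	return SD_SHARE_PKG_RESOURCES | SD_SHARE_POWERDOMAIN;
}
```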

- 29 June 2014, 1 commit

Submitted by Will Deacon
On the syscall tracing path, we call out to secure_computing() to allow seccomp to check the syscall number being attempted. As part of this, a SIGTRAP may be sent to the tracer and the syscall could be re-written by a subsequent SET_SYSCALL ptrace request. Unfortunately, this new syscall is ignored by the current code unless TIF_SYSCALL_TRACE is also set on the current thread. This patch slightly reworks the enter path of the syscall tracing code so that we always reload the syscall number from current_thread_info()->syscall after the potential ptrace traps.
Acked-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
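
A simplified, hedged sketch of the reworked entry path; the real function in arch/arm/kernel/ptrace.c also handles tracepoints and audit, and tracehook_report_syscall() here stands for the arch-local helper:

```c
/* Sketch of syscall_trace_enter() after the rework: the syscall number is
 * re-read from thread_info after every point at which a tracer or seccomp
 * may have rewritten it via SET_SYSCALL. */
asmlinkage int syscall_trace_enter_sketch(struct pt_regs *regs, int scno)
{
	current_thread_info()->syscall = scno;

	/* seccomp may SIGTRAP to the tracer, which can rewrite the syscall */
	if (secure_computing(scno) == -1)
		return -1;

	if (test_thread_flag(TIF_SYSCALL_TRACE))
		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);

	/* always reload: either hook above may have changed the number */
	scno = current_thread_info()->syscall;

	return scno;
}
```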

- 19 June 2014, 1 commit

Submitted by Russell King
GCC 4.6.4 spits out the following warning when building perf_event_v7.c:
arch/arm/kernel/perf_event_v7.c: In function 'krait_pmu_get_event_idx':
arch/arm/kernel/perf_event_v7.c:1927:6: warning: 'bit' may be used uninitialized in this function
While upgrading the version of gcc may solve this, the code can also be organised to be more efficient by not carrying more local variables than necessary across the armv7pmu_get_event_idx function call. If we set 'bit' to -1 (which is invalid for clear_bit), we can use that as an indication of whether we need to clear a bit after this function call.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
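
The idea, as a hedged sketch; the two Krait helpers are assumed names standing in for the real per-event region/group lookup:

```c
#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/perf_event.h>

/* Sketch: initialise 'bit' to a value that is invalid for clear_bit(), so a
 * single test at the end tells us whether a resource bit was reserved and
 * needs to be released when the counter allocation fails. */
static int krait_pmu_get_event_idx_sketch(struct pmu_hw_events *cpuc,
					  struct perf_event *event)
{
	int idx;
	int bit = -1;

	if (krait_event_needs_region(event)) {		/* assumed helper name */
		bit = krait_event_to_bit(event);	/* assumed helper name */
		if (test_and_set_bit(bit, cpuc->used_mask))
			return -EAGAIN;
	}

	idx = armv7pmu_get_event_idx(cpuc, event);
	if (idx < 0 && bit >= 0)
		clear_bit(bit, cpuc->used_mask);	/* only if we actually set one */

	return idx;
}
```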