- 26 May 2014, 2 commits
-
-
Committed by Victor Kamensky

After an instruction write into the xol area, ARMv7 code needs to flush the dcache and icache to sync them up for the given set of addresses. Having just a 'flush_dcache_page(page)' call is not enough - it is possible to have a stale instruction sitting in the icache for the given xol area slot address.

Introduce a weak arch_uprobe_copy_ixol function that by default calls the uprobes copy_to_page function and then flush_dcache_page, and on ARM define a new one that handles the xol slot copy in an ARM-specific way. The flush_uprobe_xol_access function shares/reuses the implementation of flush_ptrace_access and takes care of writing the instruction to a userland address across the variety of different cache types on ARM CPUs. Because flush_uprobe_xol_access does not have a vma around, flush_ptrace_access was split into two parts: one that retrieves the set of conditions from the vma, and a common part that receives those conditions as flags.

Note that the ARM cache flush functions need the kernel address through which the instruction write happened, so instead of using the uprobes copy_to_page function the code explicitly maps the page and does a memcpy. Note also that arch_uprobe_copy_ixol, in a similar way to copy_to_user_page, uses preempt_disable/preempt_enable.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
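A minimal C sketch of the ARM-side hook described above, assuming the kmap/memcpy/flush sequence from the commit text; the exact signature and the flush_uprobe_xol_access() call should be read as illustrative rather than as the verbatim mainline code:

    /* Sketch only: ARM override of the weak generic arch_uprobe_copy_ixol(). */
    #include <linux/highmem.h>
    #include <linux/mm.h>
    #include <linux/preempt.h>
    #include <linux/string.h>
    #include <asm/cacheflush.h>

    void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
                               void *src, unsigned long len)
    {
        void *xol_page_kaddr = kmap_atomic(page);
        void *dst = xol_page_kaddr + (vaddr & ~PAGE_MASK);

        preempt_disable();

        /* Write the probed instruction into the xol slot... */
        memcpy(dst, src, len);

        /* ...then sync the dcache/icache for that slot via the ARM helper. */
        flush_uprobe_xol_access(page, vaddr, dst, len);

        preempt_enable();

        kunmap_atomic(xol_page_kaddr);
    }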
-
Committed by Rob Herring

With large kernel builds such as allyesconfig exceeding maximum relative branch offsets, the init section will be too far away to branch to directly. This causes veneers to be added by the linker, but veneers don't work before the MMU is enabled. Fix this by moving __fixup_smp to the .head.text section, as it is not very big.

Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 22 May 2014, 2 commits
-
-
Committed by Russell King

When we unwind through an exception stack, include the saved PC value in the stack trace: this fills in otherwise missing functions in the trace (as indicated below):

    [<c03f4424>] fec_enet_interrupt+0xa0/0xe8
    [<c0066c0c>] handle_irq_event_percpu+0x68/0x228
    [<c0066e18>] handle_irq_event+0x4c/0x6c
    [<c006a024>] handle_fasteoi_irq+0xac/0x198
    [<c00664b0>] generic_handle_irq+0x4c/0x60
    [<c000f014>] handle_IRQ+0x40/0x98
    [<c0008554>] gic_handle_irq+0x30/0x64
    [<c0012900>] __irq_svc+0x40/0x50
    [<c0029030>] __do_softirq+0xe0/0x2fc    <====
    [<c0029500>] irq_exit+0xb0/0x100
    [<c000f018>] handle_IRQ+0x44/0x98
    [<c0008554>] gic_handle_irq+0x30/0x64
    [<c0012900>] __irq_svc+0x40/0x50
    [<c000f34c>] arch_cpu_idle+0x30/0x38    <====
    [<c005e1e4>] cpu_startup_entry+0xac/0x214
    [<c066297c>] rest_init+0x68/0x80
    [<c08ccb10>] start_kernel+0x2fc/0x358

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

While debugging the FEC ethernet driver using stacktrace, it was noticed that the stacktraces always begin as follows:

    [<c00117b4>] save_stack_trace_tsk+0x0/0x98
    [<c0011870>] save_stack_trace+0x24/0x28
    ...

This is because the stack trace code includes the stack frames for itself. This is incorrect behaviour, and also leads to "skip" (the number of stack frames to avoid recording) doing the wrong thing. Perversely, it does the right thing when passed a non-current thread. Fix this by ensuring that we have a known, constant number of frames above the main stack trace function, and always skip these.

Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 25 April 2014, 2 commits
-
-
Committed by Catalin Marinas

The Undef abort handler in the kernel reads the undefined instruction from user space. If the page table was modified from another CPU, the user access could fail and do_page_fault() would then be executed with interrupts disabled. This can potentially deadlock on ARM11MPCore or on Cortex-A15 with the erratum 798181 workaround enabled (both implying an IPI for TLB maintenance with the page table lock held). This patch enables IRQs in __und_usr before attempting to read the instruction from user space.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Arun KS <getarunks@gmail.com>
Cc: Hartley Sweeten <hsweeten@visionengravers.com>
Cc: Ryan Mallon <rmallon@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Catalin Marinas

This patch is in preparation for calling the iwmmxt_task_enable() function with interrupts enabled.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 23 April 2014, 2 commits
-
-
Committed by Rabin Vincent

Make ftrace work with CONFIG_DEBUG_SET_MODULE_RONX by making module text writable around the place where ftrace does its work, as is done on x86 by the patch which introduced CONFIG_DEBUG_SET_MODULE_RONX, 84e1c6bb ("x86: Add RO/NX protection for loadable kernel modules").

Tested-by: Mitchel Humpherys <mitchelh@codeaurora.org>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
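As a hedged sketch, the fix has the same shape as the x86 one: ftrace's prepare/post-process hooks toggle module text permissions around the patching (whether ARM uses exactly these helper names is an assumption based on the x86 precedent):

    /* Sketch: make module text writable only while ftrace patches it. */
    #include <linux/ftrace.h>
    #include <linux/module.h>

    int ftrace_arch_code_modify_prepare(void)
    {
        set_all_modules_text_rw();      /* drop RO on module text before patching */
        return 0;
    }

    int ftrace_arch_code_modify_post_process(void)
    {
        set_all_modules_text_ro();      /* restore RO protection afterwards */
        return 0;
    }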
-
Committed by Sebastian Capella

Enable hibernation for ARM architectures and provide the ARM architecture-specific calls used during hibernation.

The swsusp hibernation framework depends on the platform first having functional suspend/resume. Then, in order to enable hibernation on a given platform, a platform_hibernation_ops structure may need to be registered with the system in order to save/restore any SoC-specific or CPU-specific state that needs (re)init over a suspend-to-disk/resume-from-disk cycle. For example:

- "secure" SoCs that have different sets of control registers and/or different CR register access patterns;
- SoCs with L2 caches, as the activation sequence there is SoC-dependent; a full off-on cycle for L2 is not done by the hibernation support code;
- SoCs requiring steps on wakeup _before_ the "generic" parts done by cpu_suspend / cpu_resume can work correctly;
- SoCs having persistent state which is maintained during suspend and resume, but which will be lost during the power-off cycle after suspend-to-disk.

This is a rebase/rework of Frank Hofmann's v5 hibernation patchset.

Acked-by: Russ Dill <Russ.Dill@ti.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Signed-off-by: Sebastian Capella <sebastian.capella@linaro.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
[fixed duplicate virt_to_pfn() definition --rmk]
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
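For SoCs that do need such state handling, the expected pattern is to register a platform_hibernation_ops structure with the hibernation core; a minimal sketch, in which the my_soc_* callbacks are hypothetical placeholders:

    /* Sketch: SoC-specific save/restore hooks for suspend-to-disk. */
    #include <linux/init.h>
    #include <linux/suspend.h>

    static int my_soc_pre_snapshot(void)
    {
        /* Save SoC/CPU state that does not survive the power-off cycle. */
        return 0;
    }

    static void my_soc_finish(void)
    {
        /* Restore that state after the image has been read back. */
    }

    static const struct platform_hibernation_ops my_soc_hibernation_ops = {
        .pre_snapshot = my_soc_pre_snapshot,
        .finish       = my_soc_finish,
        /* .begin, .end, .enter, ... as the platform requires */
    };

    static int __init my_soc_hibernation_init(void)
    {
        hibernation_set_ops(&my_soc_hibernation_ops);
        return 0;
    }
    late_initcall(my_soc_hibernation_init);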
-
- 15 April 2014, 1 commit
-
-
Committed by Mark Brown

Use kcalloc() and ULONG_MAX rather than open coding them.

Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
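A small hedged before/after illustration of the kind of open coding meant here (the helper and variable names are made up for the example):

    #include <linux/kernel.h>   /* ULONG_MAX */
    #include <linux/slab.h>

    /* Open-coded version being replaced (illustrative):
     *     buf = kzalloc(n * sizeof(*buf), GFP_KERNEL);   // no overflow check
     *     end = ~0UL;                                    // hand-rolled ULONG_MAX
     */
    static unsigned long *alloc_slots(size_t n, unsigned long *end)
    {
        unsigned long *buf = kcalloc(n, sizeof(*buf), GFP_KERNEL);

        *end = ULONG_MAX;       /* use the named constant instead of ~0UL */
        return buf;
    }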
-
- 11 April 2014, 1 commit
-
-
Committed by Russell King

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 09 April 2014, 2 commits
-
-
Committed by Catalin Marinas

asm/assembler.h is a better place for this macro since it is used by asm files outside arch/arm/kernel/.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Arun KS <getarunks@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Chao Xie

Check the CPU ID in pj4_cp0_init() so that on non-PJ4 v7 CPUs, pj4_cp0_init() simply returns. This allows all v7 CPUs (PJ4 and non-PJ4) to share the same code.

Signed-off-by: Chao Xie <chao.xie@marvell.com>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Matt Porter <mporter@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
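A hedged sketch of the guard this adds; the ID-register mask and the helper name are illustrative, not the exact values used upstream:

    /* Sketch: skip PJ4 coprocessor-0 setup on non-PJ4 ARMv7 cores. */
    #include <linux/init.h>
    #include <asm/cputype.h>

    static int __init cpu_is_pj4(void)
    {
        unsigned int id = read_cpuid_id();

        /* Illustrative Marvell PJ4 implementer/part-number check. */
        return (id & 0xff0fff00) == 0x560f5800;
    }

    static int __init pj4_cp0_init(void)
    {
        if (!cpu_is_pj4())
            return 0;           /* nothing to do on other v7 CPUs */

        /* ... existing PJ4 iWMMXt/CP0 access setup continues here ... */
        return 0;
    }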
-
- 08 April 2014, 1 commit
-
-
Committed by Russell King

arm_pm_restart(), arm_pm_idle() and soft_restart() are all declared in system_misc.h, but this file is not included in process.c. Add the missing include. Found via sparse:

    arch/arm/kernel/process.c:98:6: warning: symbol 'soft_restart' was not declared. Should it be static?
    arch/arm/kernel/process.c:127:6: warning: symbol 'arm_pm_restart' was not declared. Should it be static?
    arch/arm/kernel/process.c:134:6: warning: symbol 'arm_pm_idle' was not declared. Should it be static?

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 07 April 2014, 2 commits
-
-
Committed by Liu Hua

When CONFIG_ARM_LPAE=y is set, pfn << PAGE_SHIFT will overflow in copy_oldmem_page() if pfn >= 0x100000, so use __pfn_to_phys() for the conversion.

Signed-off-by: Liu Hua <sdu.liu@huawei.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
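The failure mode in a few lines, as a sketch: with LPAE physical addresses are 64-bit, but `pfn << PAGE_SHIFT` is evaluated in the 32-bit type of pfn and truncates, whereas __pfn_to_phys() widens to phys_addr_t before shifting:

    /* Sketch of the conversion fixed in copy_oldmem_page(). */
    #include <linux/types.h>
    #include <asm/memory.h>

    static phys_addr_t oldmem_paddr(unsigned long pfn)
    {
        /*
         * Wrong with LPAE: the shift happens in 32-bit arithmetic and
         * truncates once pfn >= 0x100000 (physical address >= 4 GiB):
         *
         *     return pfn << PAGE_SHIFT;
         */
        return __pfn_to_phys(pfn);      /* casts to phys_addr_t first */
    }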
-
Committed by Sebastian Capella

Use of tracers in local_irq_disable() causes abort loops when it is called with IRQs disabled while running on a temporary stack. Replace local_irq_disable() with raw_local_irq_disable() to avoid the tracers.

Signed-off-by: Sebastian Capella <sebastian.capella@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
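The substitution itself is tiny; a sketch for context, in which the surrounding function name is hypothetical:

    /* Sketch: disable IRQs without calling into the irq tracer. */
    #include <linux/irqflags.h>

    static void run_on_temp_stack(void)
    {
        /*
         * local_irq_disable() would invoke tracer hooks, which abort when
         * running on a temporary stack with IRQs already disabled; the
         * raw_ variant skips the tracing.
         */
        raw_local_irq_disable();

        /* ... continue with the remaining work on the temporary stack ... */
    }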
-
- 04 April 2014, 1 commit
-
-
Committed by Russell King

virt_to_page() is incredibly inefficient when virt-to-phys patching is enabled. This is because we end up with this calculation in assembly:

    page = &mem_map[asm virt_to_phys(addr) >> 12 - __pv_phys_offset >> 12]

The asm virt_to_phys() is equivalent to this operation:

    addr - PAGE_OFFSET + __pv_phys_offset

and we can see that, because this is assembly, the compiler has no chance to optimise some of that away. This should reduce down to:

    page = &mem_map[(addr - PAGE_OFFSET) >> 12]

for the common cases. Permit the compiler to make this optimisation by giving it more of the information it needs - do this by providing a virt_to_pfn() macro.

Another issue which makes this more complex is that __pv_phys_offset is a 64-bit type on all platforms. This is needlessly wasteful - if we store the physical offset as a PFN, we can save a lot of work having to deal with 64-bit values, which sometimes ends up producing incredibly horrid code:

    a4c:  e3009000  movw  r9, #0
          a4c: R_ARM_MOVW_ABS_NC  __pv_phys_offset
    a50:  e3409000  movt  r9, #0          ; r9 = &__pv_phys_offset
          a50: R_ARM_MOVT_ABS     __pv_phys_offset
    a54:  e3002000  movw  r2, #0
          a54: R_ARM_MOVW_ABS_NC  __pv_phys_offset
    a58:  e3402000  movt  r2, #0          ; r2 = &__pv_phys_offset
          a58: R_ARM_MOVT_ABS     __pv_phys_offset
    a5c:  e5999004  ldr   r9, [r9, #4]    ; r9 = high word of __pv_phys_offset
    a60:  e3001000  movw  r1, #0
          a60: R_ARM_MOVW_ABS_NC  mem_map
    a64:  e592c000  ldr   ip, [r2]        ; ip = low word of __pv_phys_offset

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
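A hedged sketch of the macro this introduces and of how virt_to_page() can then be written so the compiler sees plain arithmetic (close to, but not necessarily identical to, the final header):

    /* Sketch of the virt_to_pfn()-based definitions described above. */
    #define virt_to_pfn(kaddr) \
        ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
         PHYS_PFN_OFFSET)

    #define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))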
-
- 01 April 2014, 5 commits
-
-
Committed by Taras Kondratiuk

By default, if no fill symbol is given to the .align directive in a code section, it fills the gap with NOPs. If the previous fragment is not instruction-aligned, additional pre-alignment is done with zero bytes before the NOPs. These zero bytes are marked as data by the special symbol $d in the symbol table. Unfortunately GAS assumes that there is only code in the code section, so it "puts back" the code symbol $a at the end of this pre-alignment. So if there is some data after the alignment, it will be interpreted as code and will be swapped back to LE for a BE8 system during the final link. If an explicit fill value is given to .align, the NOP-padding code is skipped and the symbol table does not get messed up. So the workaround for this issue: use an explicit fill value if data should be aligned in a code section.

Acked-by: Ben Dooks <ben.dooks@codethink.co.uk>
Acked-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Taras Kondratiuk <taras.kondratiuk@linaro.org>
-
Committed by Ben Dooks

The kprobes test will build certain instructions incorrectly when building big endian, as .word/.short output gets endian-swapped by the linker. Change to using <asm/opcodes.h> and __inst_thumbXX() to produce the instructions.

Acked-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Taras Kondratiuk <taras.kondratiuk@linaro.org>
-
Committed by Ben Dooks

The kprobes test will build certain instructions incorrectly when building big endian, as .word output gets endian-swapped by the linker. Change to using <asm/opcodes.h> and __inst_arm() to produce the instructions.

Acked-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
[taras.kondratiuk@linaro.org: fixed unsupported coprocessor instructions]
Signed-off-by: Taras Kondratiuk <taras.kondratiuk@linaro.org>
-
Committed by Ben Dooks

Ensure we read instructions in the correct endianness by using the <asm/opcodes.h> helpers to transform them as necessary.

Acked-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
[taras.kondratiuk@linaro.org: fix next_instruction() function]
Signed-off-by: Taras Kondratiuk <taras.kondratiuk@linaro.org>
-
Committed by Ben Dooks

If we are running BE8, the data and instruction endianness do not match, so use <asm/opcodes.h> to correctly translate memory accesses into ARM instructions.

Acked-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
[taras.kondratiuk@linaro.org: fixed Thumb instruction fetch order]
Signed-off-by: Taras Kondratiuk <taras.kondratiuk@linaro.org>
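The conversion pattern shared by this series, sketched with the <asm/opcodes.h> helpers; the wrapper functions here are hypothetical, only the __mem_to_opcode_*() helpers come from the header:

    /* Sketch: fetch instructions in a BE8-safe way. */
    #include <linux/types.h>
    #include <asm/opcodes.h>

    static u32 fetch_arm_insn(const void *addr)
    {
        /* Instructions stay little-endian in memory on BE8, so convert
         * the raw word into canonical instruction order before decoding. */
        return __mem_to_opcode_arm(*(const u32 *)addr);
    }

    static u16 fetch_thumb16_insn(const void *addr)
    {
        return __mem_to_opcode_thumb16(*(const u16 *)addr);
    }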
-
- 31 March 2014, 1 commit
-
-
Committed by Jeff Layton

Due to some unfortunate history, POSIX locks have very strange and unhelpful semantics. The thing that usually catches people by surprise is that they are dropped whenever the process closes any file descriptor associated with the inode.

This is extremely problematic for people developing file servers that need to implement byte-range locks. Developers often need a "lock management" facility to ensure that file descriptors are not closed until all of the locks associated with the inode are finished.

Additionally, "classic" POSIX locks are owned by the process. Locks taken between threads within the same process won't conflict with one another, which renders them useless for synchronization between threads.

This patchset adds a new type of lock that attempts to address these issues. These locks conflict with classic POSIX read/write locks, but have semantics that are more like BSD locks with respect to inheritance and behavior on close.

This is implemented primarily by changing how the fl_owner field is set for these locks. Instead of having them owned by the files_struct of the process, they are instead owned by the filp on which they were acquired. Thus, they are inherited across fork() and are only released when the last reference to a filp is put.

These new semantics prevent them from being merged with classic POSIX locks, even if they are acquired by the same process. These locks will also conflict with classic POSIX locks even if they are acquired by the same process or on the same file descriptor.

The new locks are managed using a new set of cmd values to the fcntl() syscall. The initial implementation of this converts these values to "classic" cmd values at a fairly high level, and the details are not exposed to the underlying filesystem. We may eventually want to push this handling out to the lower filesystem code, but for now I don't see any need for it.

Also, note that with this implementation the new cmd values are only available via fcntl64() on 32-bit arches. There's little need to add support for legacy apps on a new interface like this.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
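A hedged userspace sketch of the intended usage; the F_OFD_SETLK command name is the one these locks eventually shipped under (the patchset went through different names), so treat it as illustrative here:

    /* Sketch: a byte-range lock owned by the open file description. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/lockfile", O_RDWR | O_CREAT, 0600);
        struct flock fl = {
            .l_type   = F_WRLCK,
            .l_whence = SEEK_SET,
            .l_start  = 0,
            .l_len    = 100,        /* lock the first 100 bytes */
        };

        if (fd < 0)
            return 1;

        /*
         * Unlike F_SETLK, this lock lives until the last reference to the
         * open file description is put, and it conflicts even with locks
         * taken by the same process through a different open().
         */
        if (fcntl(fd, F_OFD_SETLK, &fl) < 0)
            perror("F_OFD_SETLK");

        close(fd);
        return 0;
    }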
-
- 20 March 2014, 1 commit
-
-
Committed by Srivatsa S. Bhat

Subsystems that want to register CPU hotplug callbacks, as well as perform initialization for the CPUs that are already online, often do it as shown below:

    get_online_cpus();

    for_each_online_cpu(cpu)
        init_cpu(cpu);

    register_cpu_notifier(&foobar_cpu_notifier);

    put_online_cpus();

This is wrong, since it is prone to ABBA deadlocks involving the cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently with CPU hotplug operations). Instead, the correct and race-free way of performing the callback registration is:

    cpu_notifier_register_begin();

    for_each_online_cpu(cpu)
        init_cpu(cpu);

    /* Note the use of the double underscored version of the API */
    __register_cpu_notifier(&foobar_cpu_notifier);

    cpu_notifier_register_done();

Fix the hw-breakpoint code in arm by using this latter form of callback registration.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Ingo Molnar <mingo@kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 19 March 2014, 13 commits
-
-
Committed by Viresh Kumar

The two cpufreq notifiers CPUFREQ_RESUMECHANGE and CPUFREQ_SUSPENDCHANGE have not been used for some time, so remove them to clean up the code a bit.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
[rjw: Changelog]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by David A. Long

Using Rabin Vincent's ARM uprobes patches as a base, enable uprobes support on ARM.

Caveats:
- Thumb is not supported

Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: David A. Long <dave.long@linaro.org>
-
Committed by David A. Long

Because the common underlying code for ARM kprobes and uprobes needs to share a common architecture-specific context structure, and because the generic kprobes include file insists on defining this to a dummy structure when kprobes is not configured, a new common structure is required which can exist when uprobes is configured without kprobes. In this case kprobes will define a dummy structure, but without the define aliasing the two structure tags it will not affect uprobes and the shared probes code.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
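The structural idea, as a loose sketch; the field list and the aliasing mechanism shown are approximations of the pattern described, not the exact headers:

    /* Shared arch context used by the common probes decoder. */
    struct arch_probes_insn {
        unsigned long   *insn;          /* copy of / slot for the probed instruction */
        void            (*insn_handler)(void);
        /* ... simulate/emulate and single-step callbacks ... */
    };

    #ifdef CONFIG_KPROBES
    /* kprobes keeps its historical structure tag by aliasing it onto the
     * shared one, so uprobes-only builds are unaffected by the dummy
     * definition the generic kprobes header would otherwise provide. */
    #define arch_specific_insn arch_probes_insn
    #endif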
-
Committed by David A. Long

Add an emulate flag to the instruction interpreter, primarily for uprobes support.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
Committed by David A. Long

Any remaining ARM kprobes/uprobes symbols which have "kprobe" in the name must be changed to the more generic "probes" or another non-kprobes-specific symbol.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
Committed by David A. Long

Change the name of kprobes_insn to probes_insn so it can be shared between kprobes and uprobes without confusion.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
Committed by David A. Long

Change the kprobe_emulate_none, kprobe_simulate_nop, and arm_kprobe_decode_init function names to something more appropriate for code being shared outside of the kprobes subsystem. Also, move the new arm_probes_decode_init declaration out of the kprobes.h include file and into the probes.h include file.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
Committed by David A. Long

In preparation for sharing the ARM kprobes instruction interpreting code with uprobes, make the symbol names less kprobes-specific.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
Committed by David A. Long

Change the generic ARM probes code to pass in the opcode and the architecture-specific structure separately instead of using struct kprobe, so we do not pollute code being used only for uprobes or other non-kprobes instruction interpretation.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
Committed by David A. Long

Make the instruction interpreter call back to semantic action functions through a function pointer array provided by the invoker. The interpreter decodes the instructions into groups and uses the group number to index into the supplied array. The kprobes and uprobes code will each supply their own array of functions.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
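A hedged sketch of the dispatch pattern being described: the decoder reduces an instruction to a group index and calls through whichever action table the invoker supplied (all names here are illustrative):

    /* Sketch: group-indexed action table supplied by kprobes or uprobes. */
    #include <linux/types.h>

    typedef u32 probes_opcode_t;

    enum probes_insn_group {
        PROBES_GROUP_BRANCH,
        PROBES_GROUP_LOAD_STORE,
        PROBES_GROUP_DATA_PROCESSING,
        NUM_PROBES_GROUPS,
    };

    struct decode_action {
        void (*handler)(probes_opcode_t insn, void *arch_ctx);
    };

    /* The common decoder knows only the group number... */
    static void probes_dispatch(probes_opcode_t insn, void *arch_ctx,
                                const struct decode_action *actions,
                                enum probes_insn_group group)
    {
        /* ...and indexes into the caller-provided table. */
        actions[group].handler(insn, arch_ctx);
    }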
-
Committed by David A. Long

Move the Thumb version of the kprobes instruction parsing code into more generic files from where it can be used by uprobes and possibly other subsystems. The symbol names will be made more generic in a subsequent part of this patchset.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
Committed by David A. Long

Move the ARM version of the kprobes instruction parsing code into more generic files from where it can be used by uprobes and possibly other subsystems. The symbol names will be made more generic in a subsequent part of this patchset.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
Committed by David A. Long

Make sure includes in the ARM kprobes sources are done explicitly; do not rely on includes pulled in by other includes.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Jon Medhurst <tixy@linaro.org>
-
- 07 March 2014, 2 commits
-
-
Committed by Jiri Slaby

As the data parameter is not really used by any ftrace_dyn_arch_init(), remove it from ftrace_dyn_arch_init(). This also removes the now-unused addr local variable from ftrace_init(). Note the documentation was imprecise, as it did not suggest setting (*data) to 0.

Link: http://lkml.kernel.org/r/1393268401-24379-4-git-send-email-jslaby@suse.cz
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-arch@vger.kernel.org
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Jiri Slaby

No architecture uses the "data" parameter in ftrace_dyn_arch_init() in any way; it just sets the value to 0. And this is used as a return value in the caller -- ftrace_init(), which just checks the retval against zero. Note there is also "return 0" in every ftrace_dyn_arch_init(). So it is enough to check the retval and remove all the indirect setting of data on all archs.

Link: http://lkml.kernel.org/r/1393268401-24379-3-git-send-email-jslaby@suse.cz
Cc: linux-arch@vger.kernel.org
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
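After both changes, the per-architecture hook reduces to the trivial form below on ARM and most other architectures (shown as a sketch):

    /* Sketch: ftrace_dyn_arch_init() with the unused 'data' parameter gone. */
    #include <linux/init.h>

    int __init ftrace_dyn_arch_init(void)
    {
        return 0;
    }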
-
- 03 March 2014, 1 commit
-
-
Committed by Marc Zyngier

So far, KVM/ARM used a fixed HCR configuration per guest, except for the VI/VF/VA bits used to control interrupts in the absence of the VGIC. With the upcoming need to dynamically reconfigure trapping, it becomes necessary to allow the HCR to be changed on a per-vcpu basis. The fix here is to mimic what KVM/arm64 already does: a per-vcpu HCR field, initialized at setup time.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
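A hedged sketch of the shape of the change: the HCR value becomes part of the per-vcpu architecture state and is seeded once at setup (HCR_GUEST_MASK is the real mask from asm/kvm_arm.h; the reduced struct and the helper name are illustrative):

    /* Illustrative only: the real struct kvm_vcpu_arch has many more fields. */
    struct kvm_vcpu_arch {
        unsigned long   hcr;    /* per-vcpu Hyp Configuration Register value */
        /* ... existing vcpu state ... */
    };

    static void vcpu_init_hcr(struct kvm_vcpu_arch *arch)
    {
        /* Seeded at vcpu setup; later code may adjust it per vcpu to
         * reconfigure trapping dynamically, mirroring KVM/arm64. */
        arch->hcr = HCR_GUEST_MASK;     /* from asm/kvm_arm.h */
    }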
-
- 25 February 2014, 1 commit
-
-
Committed by Anurag Aggarwal

While unwinding a backtrace, stack overflow is possible. This stack overflow can sometimes lead to a data abort if the area after the stack is not mapped to physical memory. To prevent this problem from happening, execute the instructions that can cause a data abort in separate helper functions, where a check for feasibility is made before reading each word from the stack.

Signed-off-by: Anurag Aggarwal <a.anurag@samsung.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
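A hedged sketch of the helper pattern: each word popped while unwinding is read only after the stack pointer has been checked against the known top of the stack (the structure layout and helper name are illustrative):

    /* Sketch: bounds-checked pop of one word during backtrace unwinding. */
    struct unwind_walk {
        unsigned long *sp;      /* current position on the stack being walked */
        unsigned long sp_high;  /* first address past the valid stack region */
    };

    static int unwind_pop_word(struct unwind_walk *w, unsigned long *val)
    {
        /* Refuse to read past the end of the stack instead of faulting. */
        if ((unsigned long)w->sp >= w->sp_high)
            return -1;

        *val = *w->sp++;
        return 0;
    }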
-