- 01 October 2012, 4 commits
-
-
Committed by Richard Weinberger
This include is no longer needed. (seems to be a leftover from try_to_freeze()) Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 27 September 2012, 3 commits
-
-
Committed by Matthew Leach
Ensure that the memory regions that are set within the segments correspond to physically contiguous memory regions. Reviewed-by: Simon Horman <horms@verge.net.au> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Matthew Leach <matthew.leach@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Matthew Leach
This patch allows a dtb to be passed to a new kernel using the kexec mechanism. When loading segments from userspace, scan each segment's first four bytes for the dtb magic. If this is found, point the kexec_boot_atags parameter passed to the relocate_kernel code at the physical address of this segment. Reviewed-by: Simon Horman <horms@verge.net.au> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Matthew Leach <matthew.leach@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
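In code terms, the dtb detection is just a big-endian compare of each segment's first word against the flattened-device-tree magic (0xd00dfeed). A minimal sketch of such a check, with a hypothetical helper name rather than the exact kernel code:
```c
#include <linux/kexec.h>
#include <linux/uaccess.h>
#include <asm/byteorder.h>

/* Hypothetical helper: does this kexec segment start with a flattened
 * device tree?  The dtb magic 0xd00dfeed sits big-endian at offset 0. */
static int segment_is_dtb(const struct kexec_segment *seg)
{
	__be32 magic;

	if (seg->bufsz < sizeof(magic))
		return 0;

	/* seg->buf is still a userspace pointer at load time */
	if (copy_from_user(&magic, seg->buf, sizeof(magic)))
		return 0;

	return be32_to_cpu(magic) == 0xd00dfeed;
}
```
If the magic matches, the physical address recorded for that segment (seg->mem) is what ends up being handed to relocate_kernel in place of the ATAGs pointer.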
-
Committed by Jonathan Austin
The current timer-based delay loop relies on the architected timer to initiate the switch away from the polling-based implementation. This is unfortunate for platforms without the architected timers but with a suitable delay source (that is, constant frequency, always powered-up and ticking as long as the CPUs are online). This patch introduces a registration mechanism for the delay timer (which provides an unconditional read_current_timer implementation) and updates the architected timer code to use the new interface. Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Jonathan Austin <jonathan.austin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
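A platform with such an always-on, constant-frequency counter would then register it roughly as follows. This is a sketch: the struct delay_timer fields and register_current_timer_delay() come from this series (check asm/delay.h for the exact interface), while the counter base and 24 MHz frequency are made up:
```c
#include <linux/init.h>
#include <linux/io.h>
#include <asm/delay.h>

static void __iomem *my_counter_base;	/* hypothetical always-on counter */

static unsigned long my_read_current_timer(void)
{
	return readl_relaxed(my_counter_base);	/* free-running upward counter */
}

static struct delay_timer my_delay_timer = {
	.read_current_timer	= my_read_current_timer,
	.freq			= 24000000,	/* 24 MHz, never gated */
};

static void __init my_platform_timer_init(void)
{
	/* from here on, udelay() is timer-based instead of loop-calibrated */
	register_current_timer_delay(&my_delay_timer);
}
```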
-
- 25 September 2012, 2 commits
-
-
Committed by Lorenzo Pieralisi
When a CPU is hotplugged out, caches that reside in its power domain lose their contents and so must be cleaned to the next memory level. Currently, __cpu_disable calls flush_cache_all(), which for new-generation processors like the A15/A7 ends up cleaning and invalidating all cache levels up to the Level of Coherency, including the unified L2. This ends up being a waste of cycles since the L2 cache contents are not lost on power down. This patch updates __cpu_disable to use the new LoUIS API cache operations. Acked-by: Nicolas Pitre <nico@linaro.org> Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Shawn Guo <shawn.guo@linaro.org>
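The change itself is essentially a one-call substitution in __cpu_disable(); a simplified sketch of the before/after (not the verbatim diff):
```c
#include <asm/cacheflush.h>

/* Simplified shape of the cache maintenance done in __cpu_disable() */
static void cpu_disable_cache_flush(void)
{
	/*
	 * Old: flush_cache_all() cleans and invalidates every level up to
	 * the Level of Coherency, pointlessly writing back the unified L2
	 * that keeps its contents across the CPU power-down.
	 *
	 * New: only clean/invalidate up to the Level of Unification Inner
	 * Shareable - the levels that actually sit in the dying CPU's
	 * power domain.
	 */
	flush_cache_louis();
}
```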
-
Committed by Lorenzo Pieralisi
In processors like the A15/A7, the L2 cache is unified and integrated within the processor cache hierarchy, so it is no longer considered an outer cache. For such processors flush_cache_all() ends up cleaning all cache levels up to the Level of Coherency (LoC), which includes the unified L2. When a single CPU is suspended (CPU idle) a complete L2 clean is not required, so the generic cpu_suspend code must clean the data cache using the newly introduced LoUIS cache function. The context and stack pointer (context pointer) are cleaned to main memory using cache area functions that operate on MVAs and guarantee that the data is written back to main memory (cache cleaning up to the Point of Coherency - PoC), so that the processor can fetch the context when the MMU is off in the cpu_resume code path. outer_cache management remains unchanged. Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Reviewed-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Shawn Guo <shawn.guo@linaro.org>
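For the suspend path the same idea applies, plus an address-range clean of just the saved context so it reaches main memory. A rough sketch using the LoUIS and dcache-area helpers (illustration only, not the exact suspend.c code):
```c
#include <asm/cacheflush.h>

/* Simplified shape of the cache maintenance around cpu_suspend() */
static void suspend_clean_caches(void *saved_context, size_t size)
{
	/* CPU-private levels only: a full L2 clean is not needed for idle */
	flush_cache_louis();

	/*
	 * The saved context (and the pointer to it) must reach main memory
	 * (Point of Coherency) so cpu_resume can load it with the MMU and
	 * caches off - hence an MVA/area clean rather than a set/way one.
	 */
	__cpuc_flush_dcache_area(saved_context, size);
}
```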
-
- 22 September 2012, 1 commit
-
-
Committed by Russell King
kcmp has appeared on x86, but has not been noticed because checksyscalls.sh is broken at the moment. Reserve ARM syscall 378 for this should we ever need it, and add an __IGNORE entry for this unimplemented syscall. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
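The reservation amounts to a couple of lines in the ARM unistd.h; schematically (illustrative, not the exact diff):
```c
/* arch/arm/include/asm/unistd.h (schematic) */

/* 378 is held back for kcmp until ARM actually wires it up:
 * #define __NR_kcmp		(__NR_SYSCALL_BASE+378)
 */

/* keep checksyscalls.sh quiet about the deliberately missing syscall */
#define __IGNORE_kcmp
```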
-
- 20 September 2012, 5 commits
-
-
Committed by Al Viro
most of the architectures don't, and there's not a single caller outside of the core kernel. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Mike Turquette
Running the cpufreq driver on imx6q, the following warning is seen:

BUG: sleeping function called from invalid context at kernel/mutex.c:269
<snip>
stack backtrace:
Backtrace:
[<80011d64>] (dump_backtrace+0x0/0x10c) from [<803fc164>] (dump_stack+0x18/0x1c)
 r6:bf8142e0 r5:bf814000 r4:806ac794 r3:bf814000
[<803fc14c>] (dump_stack+0x0/0x1c) from [<803fd444>] (print_usage_bug+0x250/0x2b8)
[<803fd1f4>] (print_usage_bug+0x0/0x2b8) from [<80060f90>] (mark_lock+0x56c/0x670)
[<80060a24>] (mark_lock+0x0/0x670) from [<80061a20>] (__lock_acquire+0x98c/0x19b4)
[<80061094>] (__lock_acquire+0x0/0x19b4) from [<80062f14>] (lock_acquire+0x68/0x7c)
[<80062eac>] (lock_acquire+0x0/0x7c) from [<80400f28>] (mutex_lock_nested+0x78/0x344)
 r7:00000000 r6:bf872000 r5:805cc858 r4:805c2a04
[<80400eb0>] (mutex_lock_nested+0x0/0x344) from [<803089ac>] (clk_get_rate+0x1c/0x58)
[<80308990>] (clk_get_rate+0x0/0x58) from [<80013c48>] (twd_update_frequency+0x18/0x50)
 r5:bf253d04 r4:805cadf4
[<80013c30>] (twd_update_frequency+0x0/0x50) from [<80068e20>] (generic_smp_call_function_single_interrupt+0xd4/0x13c)
 r4:bf873ee0 r3:80013c30
[<80068d4c>] (generic_smp_call_function_single_interrupt+0x0/0x13c) from [<8001334c>] (handle_IPI+0xc0/0x194)
 r8:00000001 r7:00000000 r6:80574e48 r5:bf872000 r4:80593958
[<8001328c>] (handle_IPI+0x0/0x194) from [<800084e8>] (gic_handle_irq+0x58/0x60)
 r8:00000000 r7:bf873f8c r6:bf873f58 r5:80593070 r4:f4000100 r3:00000005
[<80008490>] (gic_handle_irq+0x0/0x60) from [<8000e124>] (__irq_svc+0x44/0x60)
Exception stack(0xbf873f58 to 0xbf873fa0)
3f40: 00000001 00000001
3f60: 00000000 bf814000 bf872000 805cab48 80405aa4 80597648 00000000 412fc09a
3f80: bf872000 bf873fac bf873f70 bf873fa0 80063844 8000f1f8 20000013 ffffffff
 r6:ffffffff r5:20000013 r4:8000f1f8 r3:bf814000
[<8000f1b8>] (default_idle+0x0/0x4c) from [<8000f428>] (cpu_idle+0x98/0x114)
[<8000f390>] (cpu_idle+0x0/0x114) from [<803f9834>] (secondary_start_kernel+0x11c/0x140)
[<803f9718>] (secondary_start_kernel+0x0/0x140) from [<103f9234>] (0x103f9234)
 r6:10c03c7d r5:0000001f r4:4f86806a r3:803f921c

It looks like the warning is caused by twd_update_frequency() being called from an atomic context while it calls clk_get_rate(), which takes a mutex. To fix the warning, let's convert common clk users over to clk notifiers in place of CPUfreq notifiers. This works out nicely for Cortex-A9 MPCore designs that scale all CPUs at the same frequency. Platforms that have not been converted to the common clk framework and support CPUfreq will rely on the old mechanism. Once these platforms are fully converted we can remove the CPUfreq-specific bits for good. Signed-off-by: Mike Turquette <mturquette@linaro.org> Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
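The clk-notifier shape looks roughly like this. It is a sketch: clk_notifier_register(), struct clk_notifier_data and the *_RATE_CHANGE events are the common clk API, while twd_recalc_rate() stands in for the TWD-specific update and is hypothetical:
```c
#include <linux/clk.h>
#include <linux/notifier.h>

/* hypothetical helper, defined elsewhere: reprogram the TWD for new_rate */
static void twd_recalc_rate(unsigned long new_rate);

static int twd_rate_change_notify(struct notifier_block *nb,
				  unsigned long event, void *data)
{
	struct clk_notifier_data *cnd = data;

	/*
	 * The notifier runs in process context and is handed the new rate
	 * directly, so nothing here needs clk_get_rate() and its mutex -
	 * unlike the old cpufreq transition notifier that fired from an
	 * atomic IPI handler.
	 */
	if (event == POST_RATE_CHANGE)
		twd_recalc_rate(cnd->new_rate);

	return NOTIFY_OK;
}

static struct notifier_block twd_clk_nb = {
	.notifier_call = twd_rate_change_notify,
};

/* registered once against the clock feeding the TWD, e.g.:
 *	clk_notifier_register(twd_clk, &twd_clk_nb);
 */
```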
-
Committed by Stephen Boyd
Remove the offset from ipi_msg_type and assume that SGI0 is the wakeup interrupt now that all WFI hotplug users call gic_raise_softirq() with 0 instead of 1. This allows us to track how many wakeup interrupts are sent and also removes the unknown IPI printk message for WFI hotplug based systems. Reviewed-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon
When tracing system calls, a debugger may change the syscall number in response to a SIGTRAP on syscall entry. This patch ensures that the new syscall number is passed to the audit code. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Wade Farnsworth
As specified by ftrace-design.txt, TIF_SYSCALL_TRACEPOINT was added, as well as NR_syscalls in asm/unistd.h. Additionally, __sys_trace was modified to call trace_sys_enter and trace_sys_exit when appropriate. Tests #2 - #4 of "perf test" now complete successfully. Signed-off-by: Steven Walter <stevenrwalter@gmail.com> Signed-off-by: Wade Farnsworth <wade_farnsworth@mentor.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
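Conceptually the entry side reduces to firing the generic tracepoint when the new thread flag is set; a simplified sketch (the real hook also has to coexist with TIF_SYSCALL_TRACE and the audit path):
```c
#include <linux/ptrace.h>
#include <linux/thread_info.h>
#include <trace/events/syscalls.h>

/* Simplified: called from __sys_trace on the way into a syscall */
static void syscall_tracepoint_enter(struct pt_regs *regs, long scno)
{
	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
		trace_sys_enter(regs, scno);
	/* trace_sys_exit(regs, ret) is the mirror image on the way out */
}
```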
-
- 19 September 2012, 6 commits
-
-
Committed by Marc Zyngier
If booting in HYP mode, it makes sense to enable the use of the physical timers, so the kernel can use them directly. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Dave Martin
In order to easily detect pathological cases, print some diagnostics when the kernel boots. This also provides helpers to detect that HYP mode is actually available, which can be used by other subsystems to enable HYP-specific features. Signed-off-by: Dave Martin <dave.martin@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
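A subsystem wanting HYP-dependent features would then gate itself on the new helper, along these lines (a sketch; is_hyp_mode_available() is the helper this series adds in asm/virt.h, the rest is illustrative):
```c
#include <linux/init.h>
#include <linux/printk.h>
#include <asm/virt.h>

static int __init my_hyp_feature_init(void)
{
	if (!is_hyp_mode_available()) {
		pr_info("not booted in HYP mode, feature stays disabled\n");
		return 0;
	}

	/* safe to install HYP-mode-dependent code from here on */
	return 0;
}
late_initcall(my_hyp_feature_init);
```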
-
Committed by Dave Martin
The zImage loader needs to turn on the MMU in order to take advantage of caching while decompressing the zImage. Running this in hyp mode would require the LPAE pagetable format to be supported; to avoid this complexity, this patch switches out of hyp mode, and returns back to hyp mode just before booting the kernel. This implementation assumes that the Hyp mode view of memory and the PL1 view of memory are coherent, provided that the MMU and caches are off in both, as required by the boot protocol. The zImage decompression code must drain the write buffer on completion anyway, and entry into Hyp mode should flush any prefetch buffer, avoiding hazards associated with local write buffers and the pipeline. Signed-off-by: Dave Martin <dave.martin@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Dave Martin
This patch does two things: * Ensure that asynchronous aborts are masked at kernel entry. The bootloader should be masking these anyway, but this reduces the damage window just in case it doesn't. * Enter svc mode via exception return to ensure that CPU state is properly serialised. This does not matter when switching from an ordinary privileged mode ("PL1" modes in ARMv7-AR rev C parlance), but it potentially does matter when switching from another privileged mode such as hyp mode. This should allow the kernel to boot safely either from svc mode or hyp mode, even if no support for use of the ARM Virtualization Extensions is built into the kernel. Signed-off-by: Dave Martin <dave.martin@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Thierry Reding
Most architectures implement this in exactly the same way. Instead of having each architecture duplicate this function, provide a single implementation in the core and make it a weak symbol so that it can be overridden on architectures where it is required. Signed-off-by: Thierry Reding <thierry.reding@avionic-design.de> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
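The pattern is the usual weak-default one; for pcibios_update_irq (the function named in the follow-up commit below) the common body is essentially the following, with any architecture free to supply its own strong definition. A sketch of the idea, not the exact diff:
```c
#include <linux/pci.h>

/*
 * Core default: write the assigned IRQ into the device's interrupt-line
 * config register.  An architecture overrides this simply by providing a
 * non-weak function with the same signature; the linker picks the strong
 * definition automatically.
 */
void __weak pcibios_update_irq(struct pci_dev *dev, int irq)
{
	pci_write_config_byte(dev, PCI_INTERRUPT_LINE, irq);
}
```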
-
Committed by Thierry Reding
Remove the __init annotations in order to keep pci_fixup_irqs() around after init (e.g. for hotplug). This requires the same change for the implementation of pcibios_update_irq() on all architectures. While at it, all __devinit annotations are removed as well, since they will be useless now that HOTPLUG is always on. Signed-off-by: Thierry Reding <thierry.reding@avionic-design.de> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 16 September 2012, 3 commits
-
-
Committed by Marc Zyngier
Some subsystems (KVM for example) need access to a cycle counter. In the KVM case, this is used to measure the time delta between host and guest in order to accurately generate timer events for the guest. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Marc Zyngier
At the moment, the arch_timer driver only uses the physical timer, which can cause problems if PL2 hasn't enabled PL1 access in CNTHCTL, which is likely in a virtualized environment. The virtual timer, on the other hand, is always available. This patch enables the use of the virtual timer, unless no interrupt is provided in the DT for it, in which case it falls back to the physical timer. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Add support for IRQ time accounting. This commit prepares ARM by adding the call to enable_sched_clock_irqtime() in sched_clock(). We introduce a new kernel parameter - irqtime - which takes an integer: -1 for auto, 0 for disabled, and 1 for enabled. Auto mode selects IRQ accounting if we have a sched_clock() tick rate greater than 1MHz. Frederic Weisbecker is working on a patch set which moves IRQ_TIME_ACCOUNTING into arch/, so that part is not incorporated into this patch; this facility becomes available on ARM only when both this patch and Frederic's patches are merged. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
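Schematically, the parameter handling and the auto heuristic look like this. This is a sketch with illustrative names and plumbing; only enable_sched_clock_irqtime() is an existing kernel interface:
```c
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/sched.h>

static int irqtime = -1;	/* -1 auto, 0 disabled, 1 enabled */

static int __init irqtime_setup(char *str)
{
	get_option(&str, &irqtime);
	return 1;
}
__setup("irqtime=", irqtime_setup);

/* illustrative: called once the sched_clock() tick rate is known */
static void sched_clock_irqtime_decide(unsigned long rate)
{
	/* auto: only worth it when sched_clock() ticks faster than 1 MHz */
	if (irqtime > 0 || (irqtime == -1 && rate > 1000000))
		enable_sched_clock_irqtime();
}
```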
-
- 14 September 2012, 3 commits
-
-
Committed by Rob Herring
Based on a suggestion by Russell King, create a common location for debug macros and select the included debug macro file using a config option. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Cc: Russell King <linux@arm.linux.org.uk>
-
Committed by Marc Zyngier
Almost every SMP platform defines pen_release to manage booting secondary CPUs. This of course clashes with the single zImage effort. Add the pen_release definition to the ARM SMP code, and remove all others. This should only be used by platforms which lack any kind of CPU power management... Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
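For reference, the handshake that pen_release implements on most of these platforms is roughly the following pair (simplified sketch; the dcache area flush is what makes the write visible to a secondary CPU still running with its caches off):
```c
#include <linux/smp.h>
#include <asm/barrier.h>
#include <asm/cacheflush.h>
#include <asm/smp_plat.h>

extern volatile int pen_release;	/* now defined once in arch/arm/kernel/smp.c */

/* boot CPU, from the platform's boot_secondary(): let one CPU out of the pen */
static void pen_release_cpu(unsigned int cpu)
{
	pen_release = cpu_logical_map(cpu);
	smp_wmb();
	__cpuc_flush_dcache_area((void *)&pen_release, sizeof(pen_release));
}

/* secondary CPU, from the platform's secondary_init(): report that it left */
static void pen_release_ack(void)
{
	pen_release = -1;
	smp_wmb();
}
```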
-
Committed by Marc Zyngier
Now that all SMP platforms have been converted to use struct smp_operations, remove the "weak" attribute from the hooks in smp.c, and make the functions static wherever possible. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 13 September 2012, 1 commit
-
-
Committed by Marc Zyngier
This adds a 'struct smp_operations' to abstract the CPU initialization and hot plugging functions on SMP systems, which otherwise conflict in a multiplatform kernel. This also helps shmobile and potentially others that have more than one method to do these. To allow the kernel to continue building, the platform hooks are defined as weak symbols which are overridden by the platform code. Once all platforms are converted, the "weak" attribute will be removed and the functions made static. Unlike the original version from Marc, this new version from Arnd does not use a generalized abstraction for per-soc data structures but only tries to solve the problem for the SMP operations. This way, we can collapse the previous four data structures into a single struct, which is less systematic but also easier to follow for a casual reader. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
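The resulting hook table looks roughly like this (field set as introduced in arch/arm/include/asm/smp.h; treat it as a sketch and check the header for the authoritative list):
```c
struct task_struct;

struct smp_operations {
	/* boot-time discovery and secondary bring-up */
	void (*smp_init_cpus)(void);
	void (*smp_prepare_cpus)(unsigned int max_cpus);
	void (*smp_secondary_init)(unsigned int cpu);
	int  (*smp_boot_secondary)(unsigned int cpu, struct task_struct *idle);
#ifdef CONFIG_HOTPLUG_CPU
	/* hot-unplug */
	int  (*cpu_kill)(unsigned int cpu);
	void (*cpu_die)(unsigned int cpu);
	int  (*cpu_disable)(unsigned int cpu);
#endif
};
```
Each platform fills in one instance and hands it to the core via its machine descriptor, which is what lets the previously weak platform hooks become static later in the series.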
-
- 10 September 2012, 1 commit
-
-
Committed by Richard Zhao
If CONFIG_SMP is set, cpufreq skips the loops_per_jiffy update, because different architectures have different per-CPU loops_per_jiffy definitions. Signed-off-by: Richard Zhao <richard.zhao@linaro.org> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
- 08 September 2012, 1 commit
-
-
Committed by Will Deacon
get_user may fail to load from the provided __user address due to an unhandled fault generated by the access. In the case of the undefined instruction trap, this results in failure to load the faulting instruction, in which case we should send SIGILL to the task rather than continue with potentially uninitialised data. Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: stable@vger.kernel.org Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
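The shape of the fix is simply to check the get_user() return value in the undefined-instruction path. A simplified sketch (the real handler also copes with Thumb and 16/32-bit fetches and delivers the signal through the ARM notify/die machinery rather than send_sig()):
```c
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/signal.h>
#include <linux/types.h>
#include <linux/uaccess.h>

/* Simplified: fetch the faulting instruction, or give up with SIGILL */
static int fetch_undef_instr(unsigned long pc, u32 *instr)
{
	if (get_user(*instr, (u32 __user *)pc)) {
		/* the fetch itself faulted - don't decode stale garbage */
		send_sig(SIGILL, current, 0);
		return -EFAULT;
	}
	return 0;
}
```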
-
- 04 September 2012, 2 commits
-
-
Committed by Nicolas Pitre
Now that ATAGS support is well contained, we can easily remove it from the kernel build if so desired. It has to be explicitly disabled, and only when DT support is selected. Note: disabling kernel ATAGS support does not prevent the usage of CONFIG_ARM_ATAG_DTB_COMPAT. Signed-off-by: Nicolas Pitre <nico@linaro.org> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Nicolas Pitre
Make ATAGS parsing into a source file of its own, namely atags_parse.c. Also rename compat.c to atags_compat.c to make it clearer what it is about. Same for atags.c, which is now atags_proc.c. Gather all the atags function declarations into a common atags.h. Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 August 2012, 1 commit
-
-
Committed by Russell King
There is no point reserving space at the bottom of the kernel stack for per-thread crunch state, and per-thread VFP state, if these are not supported by the kernel being built. Remove these members from the thread union when these features are disabled. Reported-by: Tim Bird <tim.bird@am.sony.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 25 August 2012, 2 commits
-
-
Committed by Will Deacon
Breakpoint validation currently fails for single-byte watchpoints on addresses ending in 11b. There is no reason to forbid such a watchpoint, so extend the validation code to allow it. Cc: Ulrich Weigand <Ulrich.Weigand@de.ibm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon
From ARM debug architecture v7.1 onwards, a watchpoint exception causes the DFAR to be updated with the faulting data address. However, DFSR.WnR takes an UNKNOWN value and therefore cannot be used in general to determine the access type that triggered the watchpoint. This patch forbids watchpoints without an overflow handler from specifying a specific access type (load/store). Those with overflow handlers must be able to handle false positives potentially triggered by a watchpoint of a different access type on the same address. For SIGTRAP-based handlers (i.e. ptrace), this should have no impact. Cc: <stable@vger.kernel.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 23 August 2012, 5 commits
-
-
Committed by Sudeep KarkadaNagesha
This patch moves the CPU-specific IRQ registration and parsing code into the CPU PMU backend. This is required because a PMU may have more than one interrupt, which in turn can be either PPI (per-cpu) or SPI (requiring strict affinity setting at the interrupt distributor). Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com> [will: cosmetic edits and reworked interrupt dispatching] Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon
This patch moves the CPU-specific PMU handling code out of perf_event.c and into perf_event_cpu.c. Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon
The CPU PMU code is tightly coupled with the generic ARM PMU handling code. This makes it cumbersome when trying to add support for other ARM PMUs (e.g. interconnect, L2 cache controller, bus) as the generic parts of the code are not readily reusable. This patch cleans up perf_event.c so that reusable code is exposed via header files to other potential PMU drivers. The CPU code is consistently named to identify it as such and also to prepare for moving it into a separate file. Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon
The CPU PMU is probed using the current cpuid information as part of the early_initcall initialising the architecture perf backend. For architectures without NMI (such as ARM), this does not need to be performed early and can be deferred to the driver probe callback. This also allows us to probe the devicetree in preference to parsing the current cpuid, which may be invalid on a big.LITTLE multi-cluster system. This patch defers the PMU probing and uses the devicetree information when available. Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon
There's a rather strange compiler barrier in the PMU disabling code which was presumably placed there by aliens. There's no valid reason for the barrier and one can only suspect that it's up to no good. This patch removes it before it has a chance to spread. Signed-off-by: Will Deacon <will.deacon@arm.com>
-